Jan 31 05:21:13 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 05:21:13 crc restorecon[4764]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 05:21:13 crc restorecon[4764]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 05:21:13 crc restorecon[4764]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc 
restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc 
restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 
05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:13 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 
crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 
05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc 
restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc 
restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 05:21:14 crc restorecon[4764]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc 
restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 
crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc 
restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc 
restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc 
restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 05:21:14 crc restorecon[4764]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 31 05:21:15 crc kubenswrapper[5050]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 05:21:15 crc kubenswrapper[5050]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 31 05:21:15 crc kubenswrapper[5050]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 05:21:15 crc kubenswrapper[5050]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 05:21:15 crc kubenswrapper[5050]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 31 05:21:15 crc kubenswrapper[5050]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.464433 5050 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474755 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474793 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474802 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474811 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474819 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474828 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474837 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474845 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474852 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474860 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474868 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474876 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474884 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474894 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474905 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474914 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474922 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474930 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474938 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474945 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474986 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.474998 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475008 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475016 5050 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475024 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475032 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475040 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475047 5050 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475055 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475063 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475085 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475096 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475106 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475115 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475124 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475131 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475139 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475146 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475154 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475163 5050 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475171 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475181 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475192 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475200 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475211 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475219 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475227 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475235 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475242 5050 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475250 5050 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475257 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475265 5050 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475273 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475284 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475293 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475302 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475310 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475319 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475327 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475336 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475344 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475352 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475360 5050 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475368 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475376 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475383 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475391 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475399 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475406 5050 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAzure Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475414 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.475423 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475604 5050 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475627 5050 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475644 5050 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475655 5050 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475667 5050 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475676 5050 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475688 5050 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475699 5050 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475709 5050 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475718 5050 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475728 5050 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475738 5050 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475747 5050 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 05:21:15 crc kubenswrapper[5050]: 
I0131 05:21:15.475757 5050 flags.go:64] FLAG: --cgroup-root="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475766 5050 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475775 5050 flags.go:64] FLAG: --client-ca-file="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475784 5050 flags.go:64] FLAG: --cloud-config="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475793 5050 flags.go:64] FLAG: --cloud-provider="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475802 5050 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475814 5050 flags.go:64] FLAG: --cluster-domain="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475823 5050 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475832 5050 flags.go:64] FLAG: --config-dir="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475841 5050 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475851 5050 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475862 5050 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475870 5050 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475881 5050 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475890 5050 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475900 5050 flags.go:64] FLAG: --contention-profiling="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475908 5050 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 
05:21:15.475917 5050 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475927 5050 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475936 5050 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475947 5050 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.475995 5050 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476006 5050 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476017 5050 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476026 5050 flags.go:64] FLAG: --enable-server="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476035 5050 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476049 5050 flags.go:64] FLAG: --event-burst="100" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476058 5050 flags.go:64] FLAG: --event-qps="50" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476067 5050 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476076 5050 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476085 5050 flags.go:64] FLAG: --eviction-hard="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476096 5050 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476105 5050 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476114 5050 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 05:21:15 crc 
kubenswrapper[5050]: I0131 05:21:15.476124 5050 flags.go:64] FLAG: --eviction-soft="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476132 5050 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476142 5050 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476151 5050 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476160 5050 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476169 5050 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476178 5050 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476187 5050 flags.go:64] FLAG: --feature-gates="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476197 5050 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476206 5050 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476216 5050 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476226 5050 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476235 5050 flags.go:64] FLAG: --healthz-port="10248" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476244 5050 flags.go:64] FLAG: --help="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476254 5050 flags.go:64] FLAG: --hostname-override="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476262 5050 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476272 5050 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 
05:21:15.476282 5050 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476291 5050 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476300 5050 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476309 5050 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476318 5050 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476326 5050 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476335 5050 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476344 5050 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476353 5050 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476362 5050 flags.go:64] FLAG: --kube-reserved="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476371 5050 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476379 5050 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476388 5050 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476397 5050 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476406 5050 flags.go:64] FLAG: --lock-file="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476414 5050 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476423 5050 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 05:21:15 crc 
kubenswrapper[5050]: I0131 05:21:15.476433 5050 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476446 5050 flags.go:64] FLAG: --log-json-split-stream="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476456 5050 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476465 5050 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476474 5050 flags.go:64] FLAG: --logging-format="text" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476483 5050 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476492 5050 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476501 5050 flags.go:64] FLAG: --manifest-url="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476509 5050 flags.go:64] FLAG: --manifest-url-header="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476520 5050 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476530 5050 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476540 5050 flags.go:64] FLAG: --max-pods="110" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476549 5050 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476559 5050 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476570 5050 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476579 5050 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476588 5050 flags.go:64] FLAG: 
--minimum-image-ttl-duration="2m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476597 5050 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476606 5050 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476626 5050 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476636 5050 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476644 5050 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476653 5050 flags.go:64] FLAG: --pod-cidr="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476662 5050 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476674 5050 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476683 5050 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476692 5050 flags.go:64] FLAG: --pods-per-core="0" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476701 5050 flags.go:64] FLAG: --port="10250" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476710 5050 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476719 5050 flags.go:64] FLAG: --provider-id="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476728 5050 flags.go:64] FLAG: --qos-reserved="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476737 5050 flags.go:64] FLAG: --read-only-port="10255" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476746 5050 flags.go:64] FLAG: 
--register-node="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476805 5050 flags.go:64] FLAG: --register-schedulable="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476816 5050 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476831 5050 flags.go:64] FLAG: --registry-burst="10" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476840 5050 flags.go:64] FLAG: --registry-qps="5" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476849 5050 flags.go:64] FLAG: --reserved-cpus="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476865 5050 flags.go:64] FLAG: --reserved-memory="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476876 5050 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476885 5050 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476895 5050 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476904 5050 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476912 5050 flags.go:64] FLAG: --runonce="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476921 5050 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476930 5050 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476939 5050 flags.go:64] FLAG: --seccomp-default="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476982 5050 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.476996 5050 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477007 5050 flags.go:64] 
FLAG: --storage-driver-db="cadvisor" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477019 5050 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477030 5050 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477039 5050 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477047 5050 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477056 5050 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477065 5050 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477075 5050 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477084 5050 flags.go:64] FLAG: --system-cgroups="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477092 5050 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477106 5050 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477115 5050 flags.go:64] FLAG: --tls-cert-file="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477124 5050 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477135 5050 flags.go:64] FLAG: --tls-min-version="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477144 5050 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477153 5050 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477162 5050 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477172 
5050 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477182 5050 flags.go:64] FLAG: --v="2" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477193 5050 flags.go:64] FLAG: --version="false" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477204 5050 flags.go:64] FLAG: --vmodule="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477215 5050 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.477224 5050 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477422 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477432 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477442 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477451 5050 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477459 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477470 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477480 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477489 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477497 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477505 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477513 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477520 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477528 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477536 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477544 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477552 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477560 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477567 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477575 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477582 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477590 
5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477598 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477606 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477613 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477621 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477629 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477637 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477645 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477653 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477661 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477668 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477676 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477684 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477691 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477699 5050 feature_gate.go:330] unrecognized feature gate: 
InsightsConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477707 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477714 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477723 5050 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477732 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477739 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477747 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477755 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477763 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477770 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477778 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477785 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477793 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477804 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477813 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477822 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477830 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477839 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477847 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477855 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477866 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477876 5050 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477885 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477894 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477904 5050 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477912 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477922 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477932 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477941 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477975 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477984 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.477992 5050 feature_gate.go:330] unrecognized feature gate: Example Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.478029 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.478037 5050 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.478045 5050 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.478053 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.478061 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.478073 5050 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.492131 5050 server.go:491] "Kubelet version" 
kubeletVersion="v1.31.5" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.492188 5050 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492325 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492351 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492366 5050 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492377 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492387 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492396 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492405 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492414 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492423 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492432 5050 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492440 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492449 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492457 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 05:21:15 crc 
kubenswrapper[5050]: W0131 05:21:15.492466 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492474 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492483 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492492 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492500 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492509 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492517 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492526 5050 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492536 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492548 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492561 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492573 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492583 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492593 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492603 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492613 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492622 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492632 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492641 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492650 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492659 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492670 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492679 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492688 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492697 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492706 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492715 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492723 5050 
feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492731 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492741 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492750 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492758 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492767 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492775 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492783 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492791 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492800 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492808 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492817 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492825 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492833 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492841 5050 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstall Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492850 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492860 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492869 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492878 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492886 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492895 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492903 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492911 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492920 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492928 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492936 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492945 5050 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492987 5050 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.492998 5050 feature_gate.go:330] unrecognized feature gate: Example Jan 31 05:21:15 crc 
kubenswrapper[5050]: W0131 05:21:15.493009 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493022 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.493040 5050 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493308 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493324 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493336 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493347 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493357 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493365 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493373 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493382 5050 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493391 5050 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493411 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493423 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493433 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493443 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493452 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493461 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493471 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493479 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493489 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493498 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493508 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493516 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493525 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493533 5050 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493541 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493550 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493559 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493567 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493576 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493584 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493593 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493601 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493609 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493618 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493629 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493643 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493652 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493661 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493670 5050 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493678 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493687 5050 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493695 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493704 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493713 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493724 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493735 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493744 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493753 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493763 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493771 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493780 5050 feature_gate.go:330] unrecognized feature gate: Example Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493790 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493798 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493807 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493815 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493824 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493833 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493841 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493849 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493858 5050 feature_gate.go:330] unrecognized 
feature gate: MachineConfigNodes Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493869 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493881 5050 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493890 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493900 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493909 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493918 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493927 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493936 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.493945 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.494007 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.494019 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.494032 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.494052 5050 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false 
EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.494352 5050 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.501382 5050 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.501521 5050 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.503458 5050 server.go:997] "Starting client certificate rotation" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.503511 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.503732 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 21:21:09.175290543 +0000 UTC Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.503883 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.533053 5050 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.536060 5050 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.538580 5050 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.559449 5050 log.go:25] "Validated CRI v1 runtime API" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.599868 5050 log.go:25] "Validated CRI v1 image API" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.601849 5050 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.609696 5050 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-05-16-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.609745 5050 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.638394 5050 manager.go:217] Machine: {Timestamp:2026-01-31 05:21:15.63482107 +0000 UTC m=+0.683982726 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:668e546d-c46d-479d-b853-255ef6694306 BootID:ec9182ce-0cc0-426f-b3ce-57d540740844 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:23:fb:76 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:23:fb:76 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8b:74:86 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:55:0c:68 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:61:93:37 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b6:1b:57 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:9e:1b:cc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4a:3f:5e:55:1b:84 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:06:4c:0b:4e:00:71 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.638847 5050 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.639248 5050 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.640490 5050 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.640851 5050 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.640921 5050 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.641297 5050 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.641318 5050 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.641862 5050 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.641916 5050 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.642215 5050 state_mem.go:36] "Initialized new in-memory state store" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.642371 5050 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.646476 5050 kubelet.go:418] "Attempting to sync node with API server" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.646524 5050 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.646583 5050 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.646614 5050 kubelet.go:324] "Adding apiserver pod source" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.646639 5050 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.651683 5050 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.652793 5050 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.654657 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.654797 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.654825 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.654915 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.655590 5050 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657773 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657828 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657844 
5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657859 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657883 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657899 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657913 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657937 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657979 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.657995 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.658044 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.658059 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.658104 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.658813 5050 server.go:1280] "Started kubelet" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.660386 5050 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.661310 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.70:6443: connect: connection refused Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.660813 5050 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 05:21:15 crc systemd[1]: Started Kubernetes Kubelet. Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.662299 5050 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.665191 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.665258 5050 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.665343 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:17:35.047133411 +0000 UTC Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.665817 5050 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.665878 5050 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.665886 5050 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.665867 5050 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.671978 5050 factory.go:153] Registering CRI-O factory Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.672017 5050 factory.go:221] Registration of the crio container factory successfully Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.672141 5050 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: 
containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.672158 5050 factory.go:55] Registering systemd factory Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.672171 5050 factory.go:221] Registration of the systemd container factory successfully Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.672200 5050 factory.go:103] Registering Raw factory Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.672226 5050 manager.go:1196] Started watching for new ooms in manager Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.672495 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="200ms" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.672865 5050 server.go:460] "Adding debug handlers to kubelet server" Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.672982 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.673120 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.672452 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.70:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fb942ec07363d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 05:21:15.658769981 +0000 UTC m=+0.707931617,LastTimestamp:2026-01-31 05:21:15.658769981 +0000 UTC m=+0.707931617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.674256 5050 manager.go:319] Starting recovery of all containers Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689067 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689148 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689180 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689207 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689232 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689256 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689280 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689303 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689331 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689357 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689380 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689404 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689429 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689456 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689519 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689542 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689568 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689594 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689617 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689639 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689673 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689698 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" 
seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689737 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689771 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689794 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689819 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689849 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689873 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 
05:21:15.689897 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689920 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.689943 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690024 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690046 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690068 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690089 5050 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690110 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690132 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690158 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690206 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690311 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690348 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690377 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690408 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690454 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690481 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690513 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690542 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690570 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690600 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690630 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690656 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690686 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690731 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690764 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690792 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690822 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690850 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690878 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690902 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690925 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.690987 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691019 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691046 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691071 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691096 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691121 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691147 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691174 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691201 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691231 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691258 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691288 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691314 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691345 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691372 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691399 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691426 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691459 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691485 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691514 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691560 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691590 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691618 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" 
seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691648 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691675 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691705 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691735 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691764 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691867 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 
05:21:15.691897 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691926 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.691991 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692022 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692069 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692096 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692177 5050 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692210 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692238 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692263 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692293 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692321 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692347 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692414 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692450 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692491 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692527 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692560 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692589 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692616 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692663 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692693 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692724 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692755 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692784 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692813 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692842 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692873 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692901 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692931 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.692999 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693027 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693047 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693069 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693089 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693110 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693131 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693150 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693170 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693189 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693208 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693227 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693248 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693274 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693301 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693328 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693366 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693391 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693417 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693442 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693502 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693527 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693575 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693603 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693628 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693654 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693678 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693704 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693729 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693755 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693782 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" 
seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693812 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693842 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693871 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693901 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693928 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.693990 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694022 5050 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694049 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694075 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694101 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694129 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694154 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694180 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694208 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694235 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694262 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694287 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694318 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694344 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694371 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694400 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694426 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694449 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694475 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694500 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694524 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694548 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694575 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694602 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694630 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694656 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" 
seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694696 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694725 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694754 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694781 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694810 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694838 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694863 5050 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.694901 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695362 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695403 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695478 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695513 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695547 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695592 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695623 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695672 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695704 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695739 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695810 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695855 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695873 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695895 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695908 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695931 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.695965 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.701172 5050 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.701363 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.701402 5050 reconstruct.go:97] "Volume reconstruction finished" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.701424 5050 reconciler.go:26] "Reconciler: start to sync state" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.713197 5050 manager.go:324] Recovery completed Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.730946 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.732750 5050 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.733472 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.733531 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.733551 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.734788 5050 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.734841 5050 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.734883 5050 state_mem.go:36] "Initialized new in-memory state store" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.734985 5050 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.735036 5050 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.735065 5050 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.735141 5050 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 05:21:15 crc kubenswrapper[5050]: W0131 05:21:15.739662 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.739766 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.754158 5050 policy_none.go:49] "None policy: Start" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.755419 5050 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.755470 5050 state_mem.go:35] "Initializing new in-memory state store" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.766112 5050 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.808831 5050 manager.go:334] "Starting Device Plugin manager" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.808984 5050 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.809016 5050 server.go:79] "Starting device plugin registration server" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.809699 5050 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.809734 5050 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.809992 5050 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.810205 5050 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.810244 5050 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.820129 5050 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.835376 5050 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.835478 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.836880 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.836914 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.836929 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.837172 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.838865 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.838909 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840157 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840188 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840224 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840253 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840229 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840401 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840539 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840817 5050 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.840886 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.841587 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.841640 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.841658 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.841860 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.841896 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.841934 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.841965 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.842061 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.842092 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.842994 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843031 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843042 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843128 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843152 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843165 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843305 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843497 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.843529 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844131 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844178 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844194 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844399 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844417 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844426 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844462 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.844430 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.845265 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.845314 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.845335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.873731 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="400ms" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905023 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905097 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905132 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905189 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905259 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905385 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905448 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: 
I0131 05:21:15.905495 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905597 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905655 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905695 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905744 
5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905776 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.905805 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.912006 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.913814 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.913876 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.913901 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:15 crc kubenswrapper[5050]: I0131 05:21:15.913981 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 05:21:15 crc kubenswrapper[5050]: E0131 05:21:15.914523 5050 kubelet_node_status.go:99] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.70:6443: connect: connection refused" node="crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007607 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007672 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007704 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007745 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007776 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 
05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007805 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007834 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007905 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.007977 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008019 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008056 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008095 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008123 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008151 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008179 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008779 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008787 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008861 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008869 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008907 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008943 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008985 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.008946 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.009023 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.009052 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.009079 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.009084 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.009117 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.009029 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.009130 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.114738 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.116703 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.116776 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.116795 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.116843 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 05:21:16 crc kubenswrapper[5050]: E0131 05:21:16.117454 5050 kubelet_node_status.go:99] "Unable 
to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.70:6443: connect: connection refused" node="crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.177366 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.190917 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.209271 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.221839 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.230890 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.233773 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-68c0233c177c0b9cc3e4eb3217a2e07d818a94c08a577eae2bf79f710b67b66e WatchSource:0}: Error finding container 68c0233c177c0b9cc3e4eb3217a2e07d818a94c08a577eae2bf79f710b67b66e: Status 404 returned error can't find the container with id 68c0233c177c0b9cc3e4eb3217a2e07d818a94c08a577eae2bf79f710b67b66e Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.236202 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-275f423c6a9bd6979462d37f4b5d6d556699b46d786caa7a762f1214dfb93304 WatchSource:0}: Error finding container 275f423c6a9bd6979462d37f4b5d6d556699b46d786caa7a762f1214dfb93304: Status 404 returned error can't find the container with id 275f423c6a9bd6979462d37f4b5d6d556699b46d786caa7a762f1214dfb93304 Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.246796 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ebdbe3749035fc852df7b2132936da5873c7b1fc8971596b5bde18b72ee6400f WatchSource:0}: Error finding container ebdbe3749035fc852df7b2132936da5873c7b1fc8971596b5bde18b72ee6400f: Status 404 returned error can't find the container with id ebdbe3749035fc852df7b2132936da5873c7b1fc8971596b5bde18b72ee6400f Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.260373 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4f60ed727329a2e3cd2b9eb4b1811fe3ed8e073b9357608848a968e3036b9eb9 
WatchSource:0}: Error finding container 4f60ed727329a2e3cd2b9eb4b1811fe3ed8e073b9357608848a968e3036b9eb9: Status 404 returned error can't find the container with id 4f60ed727329a2e3cd2b9eb4b1811fe3ed8e073b9357608848a968e3036b9eb9 Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.264046 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-22b2a1b7ca414855af626222357b591365f32d220b07567a34dc2f168597ac2b WatchSource:0}: Error finding container 22b2a1b7ca414855af626222357b591365f32d220b07567a34dc2f168597ac2b: Status 404 returned error can't find the container with id 22b2a1b7ca414855af626222357b591365f32d220b07567a34dc2f168597ac2b Jan 31 05:21:16 crc kubenswrapper[5050]: E0131 05:21:16.275240 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="800ms" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.517912 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.520389 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.520427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.520437 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.520497 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 05:21:16 crc kubenswrapper[5050]: E0131 05:21:16.520994 5050 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.70:6443: connect: connection refused" node="crc" Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.603640 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:16 crc kubenswrapper[5050]: E0131 05:21:16.603768 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.663020 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.666025 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:21:59.01591533 +0000 UTC Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.718353 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:16 crc kubenswrapper[5050]: E0131 05:21:16.718456 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.739362 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"275f423c6a9bd6979462d37f4b5d6d556699b46d786caa7a762f1214dfb93304"} Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.740674 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"68c0233c177c0b9cc3e4eb3217a2e07d818a94c08a577eae2bf79f710b67b66e"} Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.742565 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"22b2a1b7ca414855af626222357b591365f32d220b07567a34dc2f168597ac2b"} Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.743604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4f60ed727329a2e3cd2b9eb4b1811fe3ed8e073b9357608848a968e3036b9eb9"} Jan 31 05:21:16 crc kubenswrapper[5050]: I0131 05:21:16.744645 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ebdbe3749035fc852df7b2132936da5873c7b1fc8971596b5bde18b72ee6400f"} Jan 31 05:21:16 crc kubenswrapper[5050]: W0131 05:21:16.756396 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:16 crc kubenswrapper[5050]: E0131 05:21:16.756493 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:17 crc kubenswrapper[5050]: E0131 05:21:17.076446 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="1.6s" Jan 31 05:21:17 crc kubenswrapper[5050]: W0131 05:21:17.163847 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:17 crc kubenswrapper[5050]: E0131 05:21:17.164014 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.321842 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.323928 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 
05:21:17.324019 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.324039 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.324277 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 05:21:17 crc kubenswrapper[5050]: E0131 05:21:17.324836 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.70:6443: connect: connection refused" node="crc" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.656036 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 05:21:17 crc kubenswrapper[5050]: E0131 05:21:17.657853 5050 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.662658 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.666979 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:55:54.902059315 +0000 UTC Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.753308 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1b6dcec9ec40aed9a03eac63c87fc2e15afc66ead30ede2616563482f356a508" exitCode=0 Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.753390 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1b6dcec9ec40aed9a03eac63c87fc2e15afc66ead30ede2616563482f356a508"} Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.753493 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.754725 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.754783 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.754803 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.755575 5050 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783" exitCode=0 Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.755649 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783"} Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.755731 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.757044 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.757087 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.757105 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.758433 5050 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="944d6a7d5d890068d8b0dd96e2ec28fd0cf130fde1f6092eb13176cde30a0726" exitCode=0 Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.758510 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"944d6a7d5d890068d8b0dd96e2ec28fd0cf130fde1f6092eb13176cde30a0726"} Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.758536 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.759742 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.759796 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.759815 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.762279 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d"} Jan 31 05:21:17 crc 
kubenswrapper[5050]: I0131 05:21:17.762323 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a"} Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.762337 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936"} Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.764108 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af" exitCode=0 Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.764142 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af"} Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.764296 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.766031 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.766080 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.766099 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.768598 5050 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.773615 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.773804 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:17 crc kubenswrapper[5050]: I0131 05:21:17.773833 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.662395 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.667624 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:56:46.439035483 +0000 UTC Jan 31 05:21:18 crc kubenswrapper[5050]: E0131 05:21:18.679291 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="3.2s" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.769057 5050 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9f84e4ecfa0bd44da2b5068a836a1f208e0f49db5d54aadf7b2d6f9a2d997ed2" exitCode=0 Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.769124 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9f84e4ecfa0bd44da2b5068a836a1f208e0f49db5d54aadf7b2d6f9a2d997ed2"} 
Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.769247 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.771355 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.771382 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.771506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.789989 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.790040 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.790058 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.790157 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.791082 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.791108 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.791120 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.793845 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8e17846ad655c9c39b764cf8aef5df05d0f97e26aa56992971f4db04c9750ddb"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.793986 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.795147 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.795171 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.795182 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.810740 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.811055 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.812425 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.812486 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.812511 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.818490 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.818557 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.818572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc"} Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.818584 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace"} Jan 31 05:21:18 crc kubenswrapper[5050]: W0131 05:21:18.828286 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:18 crc 
kubenswrapper[5050]: W0131 05:21:18.828299 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.70:6443: connect: connection refused Jan 31 05:21:18 crc kubenswrapper[5050]: E0131 05:21:18.828379 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:18 crc kubenswrapper[5050]: E0131 05:21:18.828396 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.70:6443: connect: connection refused" logger="UnhandledError" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.925766 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.926895 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.926927 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.926937 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:18 crc kubenswrapper[5050]: I0131 05:21:18.926976 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 05:21:18 crc kubenswrapper[5050]: E0131 05:21:18.927378 5050 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.70:6443: connect: connection refused" node="crc" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.668541 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:20:01.443721553 +0000 UTC Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.827536 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e"} Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.827631 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.828875 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.828918 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.828937 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.831078 5050 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b2ce391e182a2d1e4561d24243dcbffe1fe282bfd6559836365acdea77c40290" exitCode=0 Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.831166 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.831251 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.831290 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b2ce391e182a2d1e4561d24243dcbffe1fe282bfd6559836365acdea77c40290"} Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.831328 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.831308 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.831370 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.832130 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.832185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.832204 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833037 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833072 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833086 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833118 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 
05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833143 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833161 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833521 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833576 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.833596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.971170 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:19 crc kubenswrapper[5050]: I0131 05:21:19.979790 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.669149 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:54:15.258876732 +0000 UTC Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.795196 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.839640 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0e61a76ed8a8277321659bfeb4ba1ff0a3a8e2f2ba87f478b9a4ceb89afa59c6"} Jan 
31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.839707 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb972f9fdac10faa54b50a9219d070fa279646e9ee0e36618f77bc5dc254566c"} Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.839731 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e56aaf7d76d5d8e22bd63b2f543c9d69526ee0f4f704fdf93f230299d0d9f21"} Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.839768 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.839844 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.839853 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.839997 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.842609 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.842681 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.842702 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.842613 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 
05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.845248 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:20 crc kubenswrapper[5050]: I0131 05:21:20.845276 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.310115 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.669933 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:59:01.139986793 +0000 UTC Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.850672 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abe6452db8a61013ca3bda0a2d3a43003ee7151a412927d8bfe779796d2af708"} Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.850739 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aae13eecc86b51cc95284d3b3fc12359d2e2568ba76275c43562b99c1527b14e"} Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.850776 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.850885 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.851004 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.852317 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.852368 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.852385 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.852621 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.852675 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.852694 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.853193 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.853276 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.853297 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:21 crc kubenswrapper[5050]: I0131 05:21:21.972203 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.028917 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.128078 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.129907 5050 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.129990 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.130010 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.130047 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.670791 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:32:41.792699048 +0000 UTC Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.853866 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.853876 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.855568 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.855619 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.855640 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.855578 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.855703 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
05:21:22 crc kubenswrapper[5050]: I0131 05:21:22.855734 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.156316 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.156484 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.157711 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.157760 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.157776 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.671365 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:51:24.600499705 +0000 UTC Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.856327 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.857642 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.857694 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:23 crc kubenswrapper[5050]: I0131 05:21:23.857711 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:24 crc 
kubenswrapper[5050]: I0131 05:21:24.672279 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:17:35.795763 +0000 UTC Jan 31 05:21:25 crc kubenswrapper[5050]: I0131 05:21:25.673183 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:59:07.932095576 +0000 UTC Jan 31 05:21:25 crc kubenswrapper[5050]: E0131 05:21:25.820274 5050 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.520065 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.520287 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.522057 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.522107 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.522126 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.529062 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.674119 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:52:28.682046368 +0000 UTC 
Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.864355 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.865922 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.866009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:26 crc kubenswrapper[5050]: I0131 05:21:26.866022 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:27 crc kubenswrapper[5050]: I0131 05:21:27.660072 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:27 crc kubenswrapper[5050]: I0131 05:21:27.675107 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:20:17.022617723 +0000 UTC Jan 31 05:21:27 crc kubenswrapper[5050]: I0131 05:21:27.867070 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:27 crc kubenswrapper[5050]: I0131 05:21:27.868119 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:27 crc kubenswrapper[5050]: I0131 05:21:27.868183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:27 crc kubenswrapper[5050]: I0131 05:21:27.868201 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:28 crc kubenswrapper[5050]: I0131 05:21:28.206265 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 05:21:28 crc kubenswrapper[5050]: I0131 05:21:28.206342 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 05:21:28 crc kubenswrapper[5050]: I0131 05:21:28.676188 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:43:21.416244938 +0000 UTC Jan 31 05:21:29 crc kubenswrapper[5050]: I0131 05:21:29.520470 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 05:21:29 crc kubenswrapper[5050]: I0131 05:21:29.520547 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 05:21:29 crc kubenswrapper[5050]: W0131 05:21:29.565306 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 
Jan 31 05:21:29 crc kubenswrapper[5050]: I0131 05:21:29.565409 5050 trace.go:236] Trace[1867702361]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 05:21:19.563) (total time: 10002ms): Jan 31 05:21:29 crc kubenswrapper[5050]: Trace[1867702361]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:21:29.565) Jan 31 05:21:29 crc kubenswrapper[5050]: Trace[1867702361]: [10.002063888s] [10.002063888s] END Jan 31 05:21:29 crc kubenswrapper[5050]: E0131 05:21:29.565435 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 05:21:29 crc kubenswrapper[5050]: W0131 05:21:29.643249 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 05:21:29 crc kubenswrapper[5050]: I0131 05:21:29.643344 5050 trace.go:236] Trace[402857503]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 05:21:19.641) (total time: 10001ms): Jan 31 05:21:29 crc kubenswrapper[5050]: Trace[402857503]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:21:29.643) Jan 31 05:21:29 crc kubenswrapper[5050]: Trace[402857503]: [10.001591045s] [10.001591045s] END Jan 31 05:21:29 crc kubenswrapper[5050]: E0131 05:21:29.643368 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 05:21:29 crc kubenswrapper[5050]: I0131 05:21:29.665863 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 31 05:21:29 crc kubenswrapper[5050]: I0131 05:21:29.676760 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:43:04.267315018 +0000 UTC Jan 31 05:21:29 crc kubenswrapper[5050]: E0131 05:21:29.895351 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188fb942ec07363d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 05:21:15.658769981 +0000 UTC m=+0.707931617,LastTimestamp:2026-01-31 05:21:15.658769981 +0000 UTC m=+0.707931617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 05:21:30 crc kubenswrapper[5050]: I0131 05:21:30.472790 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 05:21:30 crc kubenswrapper[5050]: I0131 05:21:30.472856 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 05:21:30 crc kubenswrapper[5050]: I0131 05:21:30.481860 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 05:21:30 crc kubenswrapper[5050]: I0131 05:21:30.482032 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 05:21:30 crc kubenswrapper[5050]: I0131 05:21:30.677377 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:18:21.790942136 +0000 UTC Jan 31 05:21:30 crc kubenswrapper[5050]: I0131 05:21:30.803349 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]log ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]etcd ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 05:21:30 crc 
kubenswrapper[5050]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/priority-and-fairness-filter ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-apiextensions-informers ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-apiextensions-controllers ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/crd-informer-synced ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-system-namespaces-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 31 05:21:30 crc kubenswrapper[5050]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 31 05:21:30 crc kubenswrapper[5050]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 31 05:21:30 crc 
kubenswrapper[5050]: [+]poststarthook/bootstrap-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/start-kube-aggregator-informers ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/apiservice-registration-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/apiservice-discovery-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]autoregister-completion ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/apiservice-openapi-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 31 05:21:30 crc kubenswrapper[5050]: livez check failed Jan 31 05:21:30 crc kubenswrapper[5050]: I0131 05:21:30.803441 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.677893 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:27:30.457955819 +0000 UTC Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.768871 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.769278 5050 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.770835 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.770884 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.770904 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.803201 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.879168 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.881195 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.881265 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.881303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:31 crc kubenswrapper[5050]: I0131 05:21:31.904702 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 05:21:32 crc kubenswrapper[5050]: I0131 05:21:32.678467 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:01:07.933517797 +0000 UTC Jan 31 05:21:32 crc kubenswrapper[5050]: I0131 05:21:32.883102 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:32 
crc kubenswrapper[5050]: I0131 05:21:32.884470 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:32 crc kubenswrapper[5050]: I0131 05:21:32.884520 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:32 crc kubenswrapper[5050]: I0131 05:21:32.884532 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:33 crc kubenswrapper[5050]: I0131 05:21:33.318749 5050 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 05:21:33 crc kubenswrapper[5050]: I0131 05:21:33.611517 5050 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 05:21:33 crc kubenswrapper[5050]: I0131 05:21:33.679007 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:01:09.134149519 +0000 UTC Jan 31 05:21:34 crc kubenswrapper[5050]: I0131 05:21:34.679817 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:05:57.074547711 +0000 UTC Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.492357 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.494027 5050 trace.go:236] Trace[1822071619]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 05:21:23.219) (total time: 12274ms): Jan 31 05:21:35 crc kubenswrapper[5050]: Trace[1822071619]: ---"Objects listed" error: 12274ms (05:21:35.493) Jan 31 05:21:35 crc kubenswrapper[5050]: 
Trace[1822071619]: [12.274152313s] [12.274152313s] END Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.494065 5050 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.494991 5050 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.495420 5050 trace.go:236] Trace[1034014996]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 05:21:22.503) (total time: 12992ms): Jan 31 05:21:35 crc kubenswrapper[5050]: Trace[1034014996]: ---"Objects listed" error: 12992ms (05:21:35.495) Jan 31 05:21:35 crc kubenswrapper[5050]: Trace[1034014996]: [12.992245723s] [12.992245723s] END Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.495456 5050 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.495470 5050 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.496321 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.527593 5050 csr.go:261] certificate signing request csr-kv4s4 is approved, waiting to be issued Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.543530 5050 csr.go:257] certificate signing request csr-kv4s4 is issued Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.658521 5050 apiserver.go:52] "Watching apiserver" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.672648 5050 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 
05:21:35.672930 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.673315 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.673441 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.673650 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.673698 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.673817 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.673883 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.673989 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.674034 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.674262 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.675812 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.676417 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.676516 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.676522 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.677033 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.677253 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.677362 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.677535 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.677570 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.679900 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-12-05 09:04:41.050373826 +0000 UTC Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.706630 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.722130 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.739188 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.771460 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.772075 5050 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.791345 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.796937 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797003 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797022 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 
05:21:35.797039 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797054 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797069 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797086 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797126 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797163 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797180 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797195 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797377 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797417 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797434 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797475 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797548 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797822 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797830 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797890 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797945 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.797937 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798024 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798048 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798081 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798104 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798127 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798148 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798165 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798359 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798367 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798383 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798406 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798411 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798425 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798449 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798623 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798648 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798681 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798716 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798913 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798933 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798760 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799205 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798803 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798847 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.798876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799257 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799284 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799303 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799319 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799772 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 
05:21:35.799332 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799821 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799511 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799723 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.799790 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800410 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800417 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800428 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800630 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800671 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800453 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800741 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800779 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800785 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800801 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800841 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800859 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.800875 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801080 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801103 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801120 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801138 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801034 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: 
"43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801255 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801432 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801450 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801462 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801476 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801790 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801818 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801834 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.801857 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802116 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802199 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802228 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802243 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802275 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802293 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802310 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 
05:21:35.802434 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802456 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802473 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802490 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802509 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802569 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802602 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802618 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802634 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802650 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802670 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802687 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802702 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802720 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802736 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802752 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802768 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802785 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802800 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802816 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802835 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802938 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802985 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.803141 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.803195 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.802852 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.803420 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.803880 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.803969 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.803997 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc 
kubenswrapper[5050]: I0131 05:21:35.804019 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.804041 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.804063 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805028 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805237 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805270 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805331 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805468 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805586 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805648 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805817 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.805967 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806244 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806290 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806295 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806632 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806620 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806677 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806705 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806729 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806750 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806783 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806806 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806824 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806847 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806869 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806890 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 
05:21:35.806909 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806929 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806964 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.806982 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807004 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807024 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807042 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807067 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807082 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807102 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807122 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807138 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807159 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807182 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807201 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807222 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807241 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" 
(UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807260 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807278 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807298 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807317 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807333 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807351 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807372 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807392 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807409 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807431 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807451 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807469 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807488 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807511 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807529 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807549 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807569 5050 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807590 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807607 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807626 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807643 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807659 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807677 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807697 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807722 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807740 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807760 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807779 
5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807796 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807824 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807844 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807862 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807881 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" 
(UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807917 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807935 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807969 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.807988 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808010 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808027 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808048 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808069 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808086 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808108 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808256 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 05:21:35 crc 
kubenswrapper[5050]: I0131 05:21:35.808280 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808302 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808326 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808346 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808365 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808388 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808409 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808428 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808450 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808471 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808486 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " 
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808511 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808534 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808553 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.808569 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809046 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809345 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809388 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809412 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809437 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809463 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809516 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809694 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.810004 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.810119 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.810427 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.810537 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.810728 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.810742 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.810973 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.811006 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.812222 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.812392 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.812504 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.812563 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.812694 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.812863 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.812918 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813265 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813334 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.813354 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:21:36.30973414 +0000 UTC m=+21.358895736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.809862 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813580 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813668 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813575 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813690 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.813763 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814121 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814160 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814237 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814363 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814413 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814587 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814657 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814777 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814876 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814913 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814943 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814988 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.814998 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815035 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815067 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815095 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815050 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815144 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815148 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815199 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815223 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815244 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815269 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815290 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815312 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815330 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815336 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815349 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815400 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815401 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815473 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815662 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 05:21:35 crc 
kubenswrapper[5050]: I0131 05:21:35.815720 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815786 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815818 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815836 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.815848 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816032 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816130 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816179 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816205 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 
05:21:35.816281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816310 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816339 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816362 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816386 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816408 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816434 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816460 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816492 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816520 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816541 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816597 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816621 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816652 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816695 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.816748 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816811 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816843 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.816938 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:36.316910669 +0000 UTC m=+21.366072275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816864 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816992 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817007 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817020 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817041 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817056 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817071 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817089 5050 reconciler_common.go:293] "Volume detached for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817104 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817118 5050 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817133 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817151 5050 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817178 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817191 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817208 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817222 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817236 5050 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817250 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817250 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817267 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817308 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817348 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817374 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817377 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817396 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817411 5050 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817395 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817408 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817436 5050 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817573 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817599 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817620 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817632 5050 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817644 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817655 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817674 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817712 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817727 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817726 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817760 5050 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817779 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817789 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817801 5050 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817814 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817825 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817835 5050 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817850 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817862 5050 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817874 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817886 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817901 5050 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817914 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817924 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817934 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817944 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817970 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817979 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817989 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818001 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818011 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818021 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818031 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818043 5050 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818053 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818062 5050 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818072 5050 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818084 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818094 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818105 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818115 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818125 5050 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818135 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818148 5050 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818165 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818177 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818188 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818199 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818211 5050 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818221 5050 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818230 5050 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818239 5050 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818251 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818262 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818272 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818285 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818294 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818305 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818320 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818928 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818984 5050 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819010 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819168 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819186 5050 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819215 5050 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819229 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819330 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819353 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819373 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819387 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.819989 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820019 5050 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820030 5050 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820045 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820056 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820072 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820082 5050 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820092 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820102 5050 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820115 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820125 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820137 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820147 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.820158 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.816938 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817788 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.817880 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818061 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818057 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818157 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818168 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818290 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818360 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818354 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818870 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.818932 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.821726 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.821739 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.821797 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.821965 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.821850 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822100 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822129 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822119 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822317 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822366 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822526 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822643 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822688 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822728 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822615 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.822939 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.823144 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.824247 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.824303 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.824276 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.824906 5050 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.824931 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.825192 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.825195 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.825291 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.825439 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.825560 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.825910 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.826566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.826884 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.826895 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827026 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827069 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.827196 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827590 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827497 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827765 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.827823 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:36.32780217 +0000 UTC m=+21.376963766 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827120 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.827838 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.828995 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.829622 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.829913 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.829981 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830161 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830228 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830321 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830320 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830521 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830632 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830687 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.831543 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.831175 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.831224 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.831347 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.831367 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.831707 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.830442 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.831823 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.832609 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.833255 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.834014 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.834037 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.834276 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.834497 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.834749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.835059 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.835142 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.836365 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.839255 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.839404 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.839557 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.840891 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.841880 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.841913 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.841936 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.843702 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:36.343669626 +0000 UTC m=+21.392831222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.848357 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.859781 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.859808 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.859824 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.859893 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:36.359863519 +0000 UTC m=+21.409025115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.865678 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.868166 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.868496 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.872109 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.881033 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.881543 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.882491 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.882550 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.886217 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.899665 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.901090 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.907151 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e" exitCode=255 Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.907266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e"} Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.908934 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921117 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921164 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921223 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921234 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921244 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921252 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921260 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921269 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921277 5050 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921284 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" 
Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921285 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921293 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921330 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921332 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921352 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921361 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921371 5050 reconciler_common.go:293] "Volume 
detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921381 5050 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921389 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921399 5050 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921410 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921419 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921428 5050 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921436 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921445 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921453 5050 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921461 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921470 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921479 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921490 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921498 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921507 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921515 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921523 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921533 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921541 5050 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921549 5050 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921557 5050 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921565 5050 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921573 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921581 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921589 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921597 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921605 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921612 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921620 
5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921628 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921636 5050 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921643 5050 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921652 5050 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921660 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921668 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921675 5050 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921683 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921690 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921698 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921705 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921718 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921726 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921734 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc 
kubenswrapper[5050]: I0131 05:21:35.921742 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921750 5050 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921759 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921769 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921779 5050 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921787 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921795 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921803 5050 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921811 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921819 5050 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921828 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921836 5050 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921846 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921854 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921862 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921870 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921878 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921886 5050 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921894 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921902 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921910 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921919 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921931 5050 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921939 5050 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921962 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921971 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921978 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921987 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.921995 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.922002 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.922010 5050 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.922018 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.922026 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.927923 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77
f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.939717 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.954472 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: E0131 05:21:35.957126 5050 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.957333 5050 scope.go:117] "RemoveContainer" containerID="6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.967089 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.967197 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-t9kbs"] Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.967453 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.968746 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.970360 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.970643 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.977329 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.985769 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.994679 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 05:21:35 crc kubenswrapper[5050]: I0131 05:21:35.995709 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.003265 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: W0131 05:21:36.005617 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f8fd27b4e41a8c22e1fd917258fb27cc265830e16f166e7f7b1c3bbb463e5f31 WatchSource:0}: Error finding container f8fd27b4e41a8c22e1fd917258fb27cc265830e16f166e7f7b1c3bbb463e5f31: Status 404 returned error can't find the container with id f8fd27b4e41a8c22e1fd917258fb27cc265830e16f166e7f7b1c3bbb463e5f31 Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.008358 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.010719 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.017560 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.017815 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.023129 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/351a69d0-1fcc-4576-aca8-011668de66da-hosts-file\") pod \"node-resolver-t9kbs\" (UID: \"351a69d0-1fcc-4576-aca8-011668de66da\") " pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.023178 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jhnc\" (UniqueName: \"kubernetes.io/projected/351a69d0-1fcc-4576-aca8-011668de66da-kube-api-access-4jhnc\") pod \"node-resolver-t9kbs\" (UID: \"351a69d0-1fcc-4576-aca8-011668de66da\") " pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.023504 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: W0131 05:21:36.029716 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c645fefcae5044be9a49813ea123904f58ae0a7b8ca79b2c2e8f08461ebe80c8 WatchSource:0}: Error finding container c645fefcae5044be9a49813ea123904f58ae0a7b8ca79b2c2e8f08461ebe80c8: Status 404 returned error can't find the container with id c645fefcae5044be9a49813ea123904f58ae0a7b8ca79b2c2e8f08461ebe80c8 Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.033007 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: W0131 05:21:36.033932 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ac3bda7fd7bcbbfa313e5305cece30dff42bc6537cad857e7885d38aeb3d9ca6 WatchSource:0}: Error finding container ac3bda7fd7bcbbfa313e5305cece30dff42bc6537cad857e7885d38aeb3d9ca6: Status 404 returned error can't find the container with id ac3bda7fd7bcbbfa313e5305cece30dff42bc6537cad857e7885d38aeb3d9ca6 Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.041368 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.050872 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.067183 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.077480 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.087690 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.102202 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.108395 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.123906 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/351a69d0-1fcc-4576-aca8-011668de66da-hosts-file\") pod \"node-resolver-t9kbs\" (UID: \"351a69d0-1fcc-4576-aca8-011668de66da\") " pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.123967 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jhnc\" (UniqueName: \"kubernetes.io/projected/351a69d0-1fcc-4576-aca8-011668de66da-kube-api-access-4jhnc\") pod \"node-resolver-t9kbs\" (UID: 
\"351a69d0-1fcc-4576-aca8-011668de66da\") " pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.124202 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/351a69d0-1fcc-4576-aca8-011668de66da-hosts-file\") pod \"node-resolver-t9kbs\" (UID: \"351a69d0-1fcc-4576-aca8-011668de66da\") " pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.126687 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.134732 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.138191 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jhnc\" (UniqueName: \"kubernetes.io/projected/351a69d0-1fcc-4576-aca8-011668de66da-kube-api-access-4jhnc\") pod \"node-resolver-t9kbs\" (UID: \"351a69d0-1fcc-4576-aca8-011668de66da\") " pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.283519 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-t9kbs" Jan 31 05:21:36 crc kubenswrapper[5050]: W0131 05:21:36.293986 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod351a69d0_1fcc_4576_aca8_011668de66da.slice/crio-76deb9c9eda046e69d9111b6c435695bbe106c742773b57abf9d5b4b2cda9df4 WatchSource:0}: Error finding container 76deb9c9eda046e69d9111b6c435695bbe106c742773b57abf9d5b4b2cda9df4: Status 404 returned error can't find the container with id 76deb9c9eda046e69d9111b6c435695bbe106c742773b57abf9d5b4b2cda9df4 Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.324500 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-tbf62"] Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.324714 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.324832 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.324844 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:21:37.324823503 +0000 UTC m=+22.373985099 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.324867 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.325020 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.325072 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:37.325056509 +0000 UTC m=+22.374218105 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.326840 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.326901 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.327057 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.327082 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.327444 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.342596 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.352656 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.368015 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.376415 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.384470 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.390428 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.398514 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.405019 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.414587 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.425440 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.425477 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5b8394e6-1648-4ba8-970b-242434354d42-rootfs\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.425493 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b5rj\" (UniqueName: \"kubernetes.io/projected/5b8394e6-1648-4ba8-970b-242434354d42-kube-api-access-2b5rj\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.425516 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.425538 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425553 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.425562 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b8394e6-1648-4ba8-970b-242434354d42-proxy-tls\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425646 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:37.425609614 +0000 UTC m=+22.474771210 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.425684 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b8394e6-1648-4ba8-970b-242434354d42-mcd-auth-proxy-config\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425690 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425748 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425762 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425802 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:37.425789188 +0000 UTC m=+22.474950784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425724 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425820 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425826 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:36 crc kubenswrapper[5050]: E0131 05:21:36.425849 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:37.42584378 +0000 UTC m=+22.475005376 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.526288 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5b8394e6-1648-4ba8-970b-242434354d42-rootfs\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.526337 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b5rj\" (UniqueName: \"kubernetes.io/projected/5b8394e6-1648-4ba8-970b-242434354d42-kube-api-access-2b5rj\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.526392 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b8394e6-1648-4ba8-970b-242434354d42-proxy-tls\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.526419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b8394e6-1648-4ba8-970b-242434354d42-mcd-auth-proxy-config\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " 
pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.526478 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5b8394e6-1648-4ba8-970b-242434354d42-rootfs\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.527206 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b8394e6-1648-4ba8-970b-242434354d42-mcd-auth-proxy-config\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.529934 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b8394e6-1648-4ba8-970b-242434354d42-proxy-tls\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.532545 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.536734 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.542168 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.545035 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 05:16:35 +0000 UTC, rotation deadline is 2026-10-29 07:46:12.104032124 +0000 UTC Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.545091 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6506h24m35.558943513s for next certificate rotation Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.549241 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b5rj\" (UniqueName: \"kubernetes.io/projected/5b8394e6-1648-4ba8-970b-242434354d42-kube-api-access-2b5rj\") pod \"machine-config-daemon-tbf62\" (UID: \"5b8394e6-1648-4ba8-970b-242434354d42\") " pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.552252 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.561313 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.571221 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.580487 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.580890 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.592964 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.603805 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.617540 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.628546 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.635385 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.641391 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: W0131 05:21:36.648865 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b8394e6_1648_4ba8_970b_242434354d42.slice/crio-06c8ba9f6033f8ac19a2e4c38bad5ff834eca8256de740cb66c826eb539790dc WatchSource:0}: Error finding container 06c8ba9f6033f8ac19a2e4c38bad5ff834eca8256de740cb66c826eb539790dc: Status 404 returned error can't find the container with id 06c8ba9f6033f8ac19a2e4c38bad5ff834eca8256de740cb66c826eb539790dc Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.654391 5050 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.664552 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.680734 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:28:36.291907056 +0000 UTC Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.686286 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kub
e-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 
05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.691712 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5cnpw"] Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.692472 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tgpmd"] Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.692634 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.692691 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.695641 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.696032 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.696361 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.696425 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.696868 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.697075 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.697081 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.700971 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.726132 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.741720 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.753563 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.763366 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.780255 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.809987 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828578 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828613 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-os-release\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828632 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-netns\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828669 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-etc-kubernetes\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828699 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-hostroot\") pod \"multus-tgpmd\" 
(UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828717 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-system-cni-dir\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828738 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828756 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-cnibin\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828771 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-cni-multus\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.828786 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829065 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cnibin\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829121 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-cni-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829152 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-kubelet\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829215 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-conf-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829259 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-os-release\") pod 
\"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829288 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-system-cni-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829313 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-cni-bin\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829391 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-daemon-config\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829436 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjh72\" (UniqueName: \"kubernetes.io/projected/eeb03b23-b94b-4aaf-aac2-a04db399ec55-kube-api-access-kjh72\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829465 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eeb03b23-b94b-4aaf-aac2-a04db399ec55-cni-binary-copy\") 
pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829492 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-k8s-cni-cncf-io\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829510 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-multus-certs\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829536 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll5cj\" (UniqueName: \"kubernetes.io/projected/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-kube-api-access-ll5cj\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.829563 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-socket-dir-parent\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.849347 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.890784 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.911436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.911486 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.911496 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"06c8ba9f6033f8ac19a2e4c38bad5ff834eca8256de740cb66c826eb539790dc"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.913480 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.914659 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.915198 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 
05:21:36.916215 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t9kbs" event={"ID":"351a69d0-1fcc-4576-aca8-011668de66da","Type":"ContainerStarted","Data":"0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.916266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t9kbs" event={"ID":"351a69d0-1fcc-4576-aca8-011668de66da","Type":"ContainerStarted","Data":"76deb9c9eda046e69d9111b6c435695bbe106c742773b57abf9d5b4b2cda9df4"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.917180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ac3bda7fd7bcbbfa313e5305cece30dff42bc6537cad857e7885d38aeb3d9ca6"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.918513 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.918555 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.918567 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c645fefcae5044be9a49813ea123904f58ae0a7b8ca79b2c2e8f08461ebe80c8"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.919562 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.919596 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f8fd27b4e41a8c22e1fd917258fb27cc265830e16f166e7f7b1c3bbb463e5f31"} Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.928507 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930779 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-os-release\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930814 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-kubelet\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930834 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-conf-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930849 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-system-cni-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930865 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-cni-bin\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930882 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-daemon-config\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930897 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjh72\" (UniqueName: \"kubernetes.io/projected/eeb03b23-b94b-4aaf-aac2-a04db399ec55-kube-api-access-kjh72\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930912 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/eeb03b23-b94b-4aaf-aac2-a04db399ec55-cni-binary-copy\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930932 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-k8s-cni-cncf-io\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930965 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-socket-dir-parent\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930981 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-multus-certs\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930998 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll5cj\" (UniqueName: \"kubernetes.io/projected/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-kube-api-access-ll5cj\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931007 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-k8s-cni-cncf-io\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931016 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930998 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-kubelet\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931036 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-os-release\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930971 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-cni-bin\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.930999 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-conf-dir\") pod \"multus-tgpmd\" (UID: 
\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931070 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-system-cni-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931140 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-os-release\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931177 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-multus-certs\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-netns\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931201 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-socket-dir-parent\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 
05:21:36.931331 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-run-netns\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931414 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-os-release\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-etc-kubernetes\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931407 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-etc-kubernetes\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931476 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-system-cni-dir\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931515 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-hostroot\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931535 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cni-binary-copy\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931558 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931581 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-cnibin\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931604 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-cni-multus\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931626 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cnibin\") pod 
\"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931659 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-cni-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-daemon-config\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931891 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-multus-cni-dir\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eeb03b23-b94b-4aaf-aac2-a04db399ec55-cni-binary-copy\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-host-var-lib-cni-multus\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc 
kubenswrapper[5050]: I0131 05:21:36.931941 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-cnibin\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931991 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cnibin\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.931998 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eeb03b23-b94b-4aaf-aac2-a04db399ec55-hostroot\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.932023 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-system-cni-dir\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.932264 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.932294 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.932637 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-cni-binary-copy\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:36 crc kubenswrapper[5050]: I0131 05:21:36.975386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjh72\" (UniqueName: \"kubernetes.io/projected/eeb03b23-b94b-4aaf-aac2-a04db399ec55-kube-api-access-kjh72\") pod \"multus-tgpmd\" (UID: \"eeb03b23-b94b-4aaf-aac2-a04db399ec55\") " pod="openshift-multus/multus-tgpmd" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.001568 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll5cj\" (UniqueName: \"kubernetes.io/projected/1f6f8108-9a7b-466b-8cf5-c578bd9f447a-kube-api-access-ll5cj\") pod \"multus-additional-cni-plugins-5cnpw\" (UID: \"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\") " pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.006912 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.011714 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: W0131 05:21:37.018041 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f6f8108_9a7b_466b_8cf5_c578bd9f447a.slice/crio-d608cb28804795e01d1faf3e817e72939fa0f9ca5824b516b91b3f2f4e0b903a WatchSource:0}: Error finding container d608cb28804795e01d1faf3e817e72939fa0f9ca5824b516b91b3f2f4e0b903a: Status 404 returned error can't find the container with id d608cb28804795e01d1faf3e817e72939fa0f9ca5824b516b91b3f2f4e0b903a Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.025096 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tgpmd" Jan 31 05:21:37 crc kubenswrapper[5050]: W0131 05:21:37.038593 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeb03b23_b94b_4aaf_aac2_a04db399ec55.slice/crio-b83a346b93b06e06703e49083960069fde1f487aff76207c78d9558c837190a4 WatchSource:0}: Error finding container b83a346b93b06e06703e49083960069fde1f487aff76207c78d9558c837190a4: Status 404 returned error can't find the container with id b83a346b93b06e06703e49083960069fde1f487aff76207c78d9558c837190a4 Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.049578 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.093708 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8hx4t"] Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.094580 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.105202 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.121260 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.122673 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.146259 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.161854 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.180665 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.200302 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.230166 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-systemd\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" 
Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236501 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-ovn-kubernetes\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236528 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-log-socket\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236549 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovn-node-metrics-cert\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236571 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-netns\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236594 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8hx4t\" (UID: 
\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236616 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-node-log\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236636 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-etc-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236669 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-env-overrides\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236688 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-kubelet\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236707 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-slash\") pod \"ovnkube-node-8hx4t\" (UID: 
\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236726 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236746 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-config\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236771 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-bin\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236790 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcbj\" (UniqueName: \"kubernetes.io/projected/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-kube-api-access-lwcbj\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236842 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-systemd-units\") pod 
\"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236862 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-ovn\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236884 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-var-lib-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236913 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-netd\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.236934 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-script-lib\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.267394 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.309571 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.337778 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.337861 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-kubelet\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.337883 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-slash\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.337900 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.337940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.337978 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:21:39.337938094 +0000 UTC m=+24.387099690 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.337980 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-slash\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.337989 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-kubelet\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338053 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-config\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338136 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-bin\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 
05:21:37.338155 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwcbj\" (UniqueName: \"kubernetes.io/projected/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-kube-api-access-lwcbj\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338201 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338231 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-systemd-units\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338250 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-ovn\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338276 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-var-lib-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: 
I0131 05:21:37.338313 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-netd\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338335 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-script-lib\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338363 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-systemd\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338381 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-ovn-kubernetes\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338394 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-log-socket\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338408 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovn-node-metrics-cert\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338426 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-netns\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338443 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338462 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-node-log\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338479 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-etc-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338509 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-env-overrides\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338713 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-config\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338856 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-ovn-kubernetes\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338875 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-systemd-units\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338912 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-bin\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.338937 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-var-lib-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339033 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-ovn\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339041 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-systemd\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.339079 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339099 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-log-socket\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339105 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-netd\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339108 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-node-log\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339120 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-netns\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339131 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-etc-openvswitch\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339098 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-env-overrides\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.339213 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:39.339155004 +0000 UTC m=+24.388316640 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.339765 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-script-lib\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.349228 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.365282 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovn-node-metrics-cert\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.374676 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwcbj\" (UniqueName: \"kubernetes.io/projected/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-kube-api-access-lwcbj\") pod \"ovnkube-node-8hx4t\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.410168 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.410237 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:37 crc kubenswrapper[5050]: W0131 05:21:37.419767 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d29ecd7_304b_4356_9f7c_c4d8d4ee809e.slice/crio-5446c51fa9c4ee345a3da3236428890e4de6be73d56fc0d8300a97a00cd6a33f WatchSource:0}: Error finding container 5446c51fa9c4ee345a3da3236428890e4de6be73d56fc0d8300a97a00cd6a33f: Status 404 returned error can't find the container with id 5446c51fa9c4ee345a3da3236428890e4de6be73d56fc0d8300a97a00cd6a33f Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.439029 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439173 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439221 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:39.439205976 +0000 UTC m=+24.488367562 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.439245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.439280 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439359 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439369 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439380 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439383 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439428 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439444 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439401 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:39.439394431 +0000 UTC m=+24.488556027 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.439533 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-31 05:21:39.439514294 +0000 UTC m=+24.488675910 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.454118 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.490814 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.528533 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.569711 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.611210 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.649393 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.681349 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:21:15.896391859 +0000 UTC Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.691779 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.735470 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.735493 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.735601 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.735639 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.735693 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:37 crc kubenswrapper[5050]: E0131 05:21:37.735759 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.737255 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.739578 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.740540 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.741564 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.742718 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 
05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.744528 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.745433 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.746898 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.747817 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.749297 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.750166 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.751473 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.752208 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 
05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.753242 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.753789 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.754520 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.755459 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.756177 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.757064 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.757650 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.758348 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 
05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.759277 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.759867 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.760353 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.761376 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.761813 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.762978 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.763657 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.765543 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 
05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.767457 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.767945 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.768673 5050 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.769191 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.770766 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.771376 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.772577 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.774048 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.774720 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.775663 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.776343 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.777348 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.777871 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.778863 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.779529 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.780499 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.781097 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.783304 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.786753 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.788112 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.788693 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.788845 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.789400 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.790257 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.790841 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.791872 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.797103 5050 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.810494 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.851139 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.890035 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.924385 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237" exitCode=0 Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.924480 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237"} Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.924693 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"5446c51fa9c4ee345a3da3236428890e4de6be73d56fc0d8300a97a00cd6a33f"} Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.925827 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerStarted","Data":"b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2"} Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.925859 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerStarted","Data":"b83a346b93b06e06703e49083960069fde1f487aff76207c78d9558c837190a4"} Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.928584 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.929199 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="1f6f8108-9a7b-466b-8cf5-c578bd9f447a" containerID="f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53" exitCode=0 Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.929276 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerDied","Data":"f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53"} Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.929303 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerStarted","Data":"d608cb28804795e01d1faf3e817e72939fa0f9ca5824b516b91b3f2f4e0b903a"} Jan 31 05:21:37 crc kubenswrapper[5050]: I0131 05:21:37.970414 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:37Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.051289 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.070037 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.092291 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.142472 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.175362 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.208136 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.236980 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tcp4l"] Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.237327 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.249163 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.260366 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.280916 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.300844 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.320327 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.354029 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-host\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.354083 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-serviceca\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.354117 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppwd4\" (UniqueName: \"kubernetes.io/projected/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-kube-api-access-ppwd4\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.371186 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.415426 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.452811 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.455406 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-host\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.455465 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-serviceca\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.455499 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppwd4\" (UniqueName: \"kubernetes.io/projected/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-kube-api-access-ppwd4\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.455620 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-host\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.456666 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-serviceca\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.512372 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppwd4\" (UniqueName: \"kubernetes.io/projected/b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf-kube-api-access-ppwd4\") pod \"node-ca-tcp4l\" (UID: \"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\") " pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.514334 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.547869 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.550197 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tcp4l" Jan 31 05:21:38 crc kubenswrapper[5050]: W0131 05:21:38.573532 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3a3f7cf_47c2_4989_b7b6_8b5d5d02cbdf.slice/crio-ebbfed1bb80cc04ad267d2d6b0cbbff960aef9120ffa9740b6ae7647d1a2eab1 WatchSource:0}: Error finding container ebbfed1bb80cc04ad267d2d6b0cbbff960aef9120ffa9740b6ae7647d1a2eab1: Status 404 returned error can't find the container with id ebbfed1bb80cc04ad267d2d6b0cbbff960aef9120ffa9740b6ae7647d1a2eab1 Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.593656 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.629120 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.670973 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.681888 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:02:56.610567489 +0000 UTC Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.712524 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.749533 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.793435 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.832106 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.867426 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.910296 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.934192 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f6f8108-9a7b-466b-8cf5-c578bd9f447a" containerID="9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d" exitCode=0 Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.934312 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerDied","Data":"9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.936275 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.940723 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.940772 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.940783 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.940792 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.940800 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" 
event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.940808 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.942349 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tcp4l" event={"ID":"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf","Type":"ContainerStarted","Data":"0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.942440 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tcp4l" event={"ID":"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf","Type":"ContainerStarted","Data":"ebbfed1bb80cc04ad267d2d6b0cbbff960aef9120ffa9740b6ae7647d1a2eab1"} Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.970430 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:38 crc kubenswrapper[5050]: I0131 05:21:38.992844 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:38Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.032936 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.071481 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.112066 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.156204 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.190878 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.238715 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.273739 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.311674 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.355550 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.363325 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.363448 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.363544 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:21:43.363497687 +0000 UTC m=+28.412659323 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.363826 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.363987 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:43.363925838 +0000 UTC m=+28.413087464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.393290 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.429427 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.464738 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.464798 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.464859 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465005 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:39 crc 
kubenswrapper[5050]: E0131 05:21:39.465059 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465068 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465070 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465098 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465141 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465084 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465185 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 05:21:43.46515747 +0000 UTC m=+28.514319106 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465214 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:43.465201601 +0000 UTC m=+28.514363227 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.465253 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:43.465225442 +0000 UTC m=+28.514387088 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.469221 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.517471 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.553279 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.594348 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.632447 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.683091 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:01:00.796399523 +0000 UTC Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.685680 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.712370 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.735895 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.735990 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.736022 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.736160 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.736335 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:39 crc kubenswrapper[5050]: E0131 05:21:39.736544 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.955268 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f6f8108-9a7b-466b-8cf5-c578bd9f447a" containerID="68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85" exitCode=0 Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.955370 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerDied","Data":"68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85"} Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.979029 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:39 crc kubenswrapper[5050]: I0131 05:21:39.997062 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:39Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.010316 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.034797 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.054055 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.072693 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.088943 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.100981 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.118367 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.137404 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.157342 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.199523 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.228252 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 
05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.278317 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 
05:21:40.683667 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:22:50.367446091 +0000 UTC Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.964246 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f6f8108-9a7b-466b-8cf5-c578bd9f447a" containerID="855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084" exitCode=0 Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.964302 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerDied","Data":"855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084"} Jan 31 05:21:40 crc kubenswrapper[5050]: I0131 05:21:40.988476 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-31T05:21:40Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.017988 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.037373 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.054914 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.074853 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.094280 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.109834 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.125447 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.138381 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.149871 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.167742 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.186727 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.205028 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.227381 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.684035 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:42:21.872227158 +0000 UTC Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.739843 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:41 crc kubenswrapper[5050]: E0131 05:21:41.740069 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.740208 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:41 crc kubenswrapper[5050]: E0131 05:21:41.740333 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.740435 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:41 crc kubenswrapper[5050]: E0131 05:21:41.740555 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.897505 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.900943 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.901655 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.901685 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.901816 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.913554 5050 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.913869 5050 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.914998 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.915024 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.915035 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.915051 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.915061 5050 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:41Z","lastTransitionTime":"2026-01-31T05:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:41 crc kubenswrapper[5050]: E0131 05:21:41.927661 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.931350 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.931382 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.931428 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.931464 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.931475 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:41Z","lastTransitionTime":"2026-01-31T05:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:41 crc kubenswrapper[5050]: E0131 05:21:41.945754 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.949555 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.949588 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.949600 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.949617 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.949629 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:41Z","lastTransitionTime":"2026-01-31T05:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:41 crc kubenswrapper[5050]: E0131 05:21:41.962573 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.966176 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.966199 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.966208 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.966221 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.966230 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:41Z","lastTransitionTime":"2026-01-31T05:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:41 crc kubenswrapper[5050]: E0131 05:21:41.980839 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.981478 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f6f8108-9a7b-466b-8cf5-c578bd9f447a" containerID="cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7" exitCode=0 Jan 31 05:21:41 crc kubenswrapper[5050]: I0131 05:21:41.981544 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerDied","Data":"cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:41.997812 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:41Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.001154 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.001196 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.001205 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.001224 5050 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.001234 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.010251 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d"} Jan 31 05:21:42 crc kubenswrapper[5050]: E0131 05:21:42.015465 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: E0131 05:21:42.015609 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.018310 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.018346 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.018358 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.018377 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.018388 5050 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.019557 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.031576 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.049756 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.064515 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.076024 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.086910 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.096083 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.103697 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.116369 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.121537 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.121563 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.121570 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.121584 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.121593 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.128740 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f297
51a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.142436 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.153425 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.168489 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:42Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.224113 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.224150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.224159 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.224174 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.224184 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.327087 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.327134 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.327144 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.327160 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.327171 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.430517 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.430577 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.430594 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.430621 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.430638 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.533736 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.533797 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.533815 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.533843 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.533862 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.636749 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.636806 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.636826 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.636852 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.636870 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.684649 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:14:03.470456183 +0000 UTC Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.739874 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.739931 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.739981 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.740006 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.740027 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.856003 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.856047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.856061 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.856079 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.856090 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.958627 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.958680 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.958701 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.958726 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:42 crc kubenswrapper[5050]: I0131 05:21:42.958744 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:42Z","lastTransitionTime":"2026-01-31T05:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.020104 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f6f8108-9a7b-466b-8cf5-c578bd9f447a" containerID="21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627" exitCode=0 Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.020151 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerDied","Data":"21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.041849 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.061916 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.062014 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.062034 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.062059 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.062008 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.062076 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.078750 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.096801 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.118178 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.137905 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.153349 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.165128 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.165160 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.165171 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.165185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.165195 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.171803 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.191558 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.204564 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.223826 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.240649 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.258850 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.267341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.267394 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.267411 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.267434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.267451 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.284656 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.369925 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.370014 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.370032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.370059 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.370077 5050 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.417071 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.417233 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.417267 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:21:51.417229916 +0000 UTC m=+36.466391522 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.417397 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.417498 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:51.417470891 +0000 UTC m=+36.466632517 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.472985 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.473032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.473053 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.473078 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.473096 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.518776 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.518855 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.518898 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519040 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519055 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519096 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519114 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:51.519092117 +0000 UTC m=+36.568253743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519116 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519127 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519159 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519174 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519198 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:51.5191725 +0000 UTC m=+36.568334126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.519224 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:21:51.519209731 +0000 UTC m=+36.568371417 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.575868 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.575930 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.575982 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.576011 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.576033 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.679029 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.679090 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.679111 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.679137 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.679155 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.685524 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:26:49.728574862 +0000 UTC Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.735388 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.735516 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.735562 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.735555 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.735728 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:43 crc kubenswrapper[5050]: E0131 05:21:43.735974 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.783270 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.783329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.783347 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.783373 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.783394 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.887185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.887250 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.887261 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.887282 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.887295 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.991487 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.991533 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.991542 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.991559 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:43 crc kubenswrapper[5050]: I0131 05:21:43.991567 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:43Z","lastTransitionTime":"2026-01-31T05:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.034570 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" event={"ID":"1f6f8108-9a7b-466b-8cf5-c578bd9f447a","Type":"ContainerStarted","Data":"745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.041434 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.041893 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.041923 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.057291 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.075856 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.078838 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 
05:21:44.078939 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.094431 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.094638 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.094697 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.094713 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.094741 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.094760 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.116127 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.138095 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.160577 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.177603 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.192847 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.198323 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.198390 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.198409 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.198436 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.198457 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.212831 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.234181 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.255130 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.285485 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.301769 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.301825 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.301844 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.301873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.301891 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.305909 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.329553 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.349149 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.374298 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.400023 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.405054 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.405119 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.405144 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.405177 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.405204 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.421997 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.446865 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.467015 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.483147 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.499359 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.507985 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.508048 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.508065 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.508091 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.508112 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.520656 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.540793 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.559931 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.590634 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.611416 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.611739 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.611875 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.612054 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.612215 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.612447 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.631061 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.686711 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:21:20.162493828 +0000 UTC Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.715240 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.715311 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.715329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.715355 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.715373 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.818272 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.818340 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.818358 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.818391 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.818413 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.921419 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.921497 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.921528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.921560 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:44 crc kubenswrapper[5050]: I0131 05:21:44.921577 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:44Z","lastTransitionTime":"2026-01-31T05:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.025196 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.025253 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.025269 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.025293 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.025311 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.045423 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.128349 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.128415 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.128479 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.128506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.128526 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.231390 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.231472 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.231491 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.231517 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.231535 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.334985 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.335036 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.335053 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.335075 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.335090 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.440408 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.440504 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.440528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.440558 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.440578 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.504166 5050 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.543504 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.543596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.543619 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.543650 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.543669 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.646655 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.646709 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.646728 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.646753 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.646771 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.687612 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:40:01.191865197 +0000 UTC Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.736102 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.736114 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.736696 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:45 crc kubenswrapper[5050]: E0131 05:21:45.736914 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:45 crc kubenswrapper[5050]: E0131 05:21:45.737080 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:45 crc kubenswrapper[5050]: E0131 05:21:45.737200 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.750356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.750465 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.750487 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.750511 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.750529 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.761561 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.785742 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.798854 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.814592 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.824151 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21
:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.848121 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.852414 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.852462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.852473 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.852491 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.852502 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.861172 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.873025 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.886173 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.897214 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.909823 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.924290 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.939835 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.954685 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.954753 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.954766 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.954786 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.954797 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:45Z","lastTransitionTime":"2026-01-31T05:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:45 crc kubenswrapper[5050]: I0131 05:21:45.969412 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.048643 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.057153 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.057192 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.057207 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.057225 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.057238 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.160401 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.160451 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.160471 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.160499 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.160518 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.263012 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.263081 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.263103 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.263133 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.263161 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.366709 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.366789 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.366809 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.366837 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.366857 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.471549 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.471622 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.471644 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.471670 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.471700 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.574586 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.574644 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.574661 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.574723 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.574742 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.677290 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.677326 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.677335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.677349 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.677357 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.688770 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 15:55:05.981880578 +0000 UTC Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.779614 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.779659 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.779673 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.779693 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.779708 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.882722 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.882793 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.882827 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.882845 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.882857 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.986255 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.986319 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.986340 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.986366 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:46 crc kubenswrapper[5050]: I0131 05:21:46.986384 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:46Z","lastTransitionTime":"2026-01-31T05:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.055546 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/0.log" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.059624 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f" exitCode=1 Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.059690 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.061098 5050 scope.go:117] "RemoveContainer" containerID="9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.083834 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.089564 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc 
kubenswrapper[5050]: I0131 05:21:47.089603 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.089615 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.089634 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.089646 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.119597 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05
:21:46Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418370 6354 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418426 6354 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:46.418439 6354 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 05:21:46.418469 6354 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:46.418483 6354 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:46.418495 6354 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:46.418508 6354 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:46.418507 6354 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418623 6354 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.419148 6354 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 05:21:46.419177 6354 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 05:21:46.419210 6354 factory.go:656] Stopping watch factory\\\\nI0131 05:21:46.419236 6354 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:46.419279 6354 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 
05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.137881 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.160577 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.183640 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.192342 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.192410 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.192434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.192465 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.192488 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.210430 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2
e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.238718 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.278520 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.294555 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.294594 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.294607 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.294627 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.294640 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.298041 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.311268 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.322630 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.337872 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.350297 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.360108 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:47Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.396969 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.397004 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.397015 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.397029 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.397038 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.500036 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.500105 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.500123 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.500150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.500169 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.603294 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.603356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.603373 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.603397 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.603424 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.689688 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:54:58.021680118 +0000 UTC Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.706125 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.706166 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.706179 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.706194 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.706209 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.735436 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.735494 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.735494 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:47 crc kubenswrapper[5050]: E0131 05:21:47.735565 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:47 crc kubenswrapper[5050]: E0131 05:21:47.735748 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:47 crc kubenswrapper[5050]: E0131 05:21:47.735895 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.812612 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.812679 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.812697 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.812721 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.812736 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.915830 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.915893 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.915916 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.915979 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:47 crc kubenswrapper[5050]: I0131 05:21:47.916008 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:47Z","lastTransitionTime":"2026-01-31T05:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.018914 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.019011 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.019029 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.019054 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.019071 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.067371 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/0.log" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.071733 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.071905 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.076564 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6"] Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.077161 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.079578 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.080086 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.096850 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.119317 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.122092 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.122147 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.122164 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.122189 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.122206 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.136617 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.151856 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.165182 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.165192 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f
4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.165257 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.165392 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfwq\" (UniqueName: \"kubernetes.io/projected/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-kube-api-access-8wfwq\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.165447 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.182908 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.203250 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.211371 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.220762 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.225220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.225282 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.225309 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.225344 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.225368 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.236124 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.255505 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.266986 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfwq\" (UniqueName: \"kubernetes.io/projected/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-kube-api-access-8wfwq\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.267068 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.267157 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.267205 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.268539 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.268641 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.274527 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.284844 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfwq\" (UniqueName: \"kubernetes.io/projected/824e777c-379f-47d8-bc4f-c8d3b0f5ad52-kube-api-access-8wfwq\") pod \"ovnkube-control-plane-749d76644c-cd5w6\" (UID: \"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.290840 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:46Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418370 6354 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418426 6354 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:46.418439 6354 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0131 05:21:46.418469 6354 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:46.418483 6354 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:46.418495 6354 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:46.418508 6354 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:46.418507 6354 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418623 6354 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.419148 6354 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 05:21:46.419177 6354 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 05:21:46.419210 6354 factory.go:656] Stopping watch factory\\\\nI0131 05:21:46.419236 6354 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:46.419279 6354 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 
05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.314167 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.328882 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.328936 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.328991 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.329021 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.329038 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.335405 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.356570 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.372168 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.390712 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.395005 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" Jan 31 05:21:48 crc kubenswrapper[5050]: W0131 05:21:48.414491 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824e777c_379f_47d8_bc4f_c8d3b0f5ad52.slice/crio-9be94834dd2c0d5bdcb60ec17bab95a45b6820be4a41f80ffc9325968f0d75af WatchSource:0}: Error finding container 9be94834dd2c0d5bdcb60ec17bab95a45b6820be4a41f80ffc9325968f0d75af: Status 404 returned error can't find the container with id 9be94834dd2c0d5bdcb60ec17bab95a45b6820be4a41f80ffc9325968f0d75af Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.414898 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.432708 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 
05:21:48.432773 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.432791 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.432821 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.432838 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.434499 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.454279 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.472318 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.494479 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.514629 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.536015 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.536446 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.536496 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.536513 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc 
kubenswrapper[5050]: I0131 05:21:48.536537 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.536557 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.557280 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.592382 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:46Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418370 6354 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418426 6354 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:46.418439 6354 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0131 05:21:46.418469 6354 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:46.418483 6354 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:46.418495 6354 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:46.418508 6354 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:46.418507 6354 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418623 6354 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.419148 6354 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 05:21:46.419177 6354 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 05:21:46.419210 6354 factory.go:656] Stopping watch factory\\\\nI0131 05:21:46.419236 6354 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:46.419279 6354 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 
05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.612744 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.640639 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.640691 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.640708 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.640734 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.640751 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.641469 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.660784 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.684118 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:48Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.690218 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:39:50.510955133 +0000 UTC Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.744387 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.744443 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.744462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.745072 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.745099 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.848233 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.848278 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.848291 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.848312 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.848327 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.951273 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.951338 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.951356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.951381 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:48 crc kubenswrapper[5050]: I0131 05:21:48.951407 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:48Z","lastTransitionTime":"2026-01-31T05:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.054735 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.054784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.054802 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.054822 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.054841 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.078278 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/1.log" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.079336 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/0.log" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.082975 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da" exitCode=1 Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.083073 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.083147 5050 scope.go:117] "RemoveContainer" containerID="9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.084197 5050 scope.go:117] "RemoveContainer" containerID="db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da" Jan 31 05:21:49 crc kubenswrapper[5050]: E0131 05:21:49.084435 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.084554 5050 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" event={"ID":"824e777c-379f-47d8-bc4f-c8d3b0f5ad52","Type":"ContainerStarted","Data":"9be94834dd2c0d5bdcb60ec17bab95a45b6820be4a41f80ffc9325968f0d75af"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.103767 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.122410 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.143027 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.157989 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.158044 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.158061 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.158084 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.158101 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.161901 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.181186 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.201364 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.219115 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.239101 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.260812 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.261764 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.262020 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.262198 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.262389 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.262589 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.280530 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0
84652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.301252 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.322689 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.354708 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:46Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418370 6354 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418426 6354 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:46.418439 6354 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0131 05:21:46.418469 6354 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:46.418483 6354 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:46.418495 6354 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:46.418508 6354 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:46.418507 6354 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418623 6354 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.419148 6354 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 05:21:46.419177 6354 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 05:21:46.419210 6354 factory.go:656] Stopping watch factory\\\\nI0131 05:21:46.419236 6354 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:46.419279 6354 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 
05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwc
bj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.366094 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.366423 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.366631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.366834 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.367063 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.381622 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.401572 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:49Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.469757 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.470127 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.470316 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.470465 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.470598 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.574264 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.574340 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.574362 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.574391 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.574413 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.677999 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.678067 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.678084 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.678109 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.678127 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.690672 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:21:15.991763434 +0000 UTC Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.737465 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:49 crc kubenswrapper[5050]: E0131 05:21:49.737669 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.737756 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.737802 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:49 crc kubenswrapper[5050]: E0131 05:21:49.737863 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:49 crc kubenswrapper[5050]: E0131 05:21:49.738011 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.781330 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.781390 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.781406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.781431 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.781449 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.884748 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.884809 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.884825 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.884851 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.884869 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.988054 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.988101 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.988118 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.988142 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:49 crc kubenswrapper[5050]: I0131 05:21:49.988159 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:49Z","lastTransitionTime":"2026-01-31T05:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.090494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" event={"ID":"824e777c-379f-47d8-bc4f-c8d3b0f5ad52","Type":"ContainerStarted","Data":"35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.091009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.091159 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.091180 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.091203 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.091231 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.194457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.194507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.194523 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.194547 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.194564 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.297104 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.297150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.297168 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.297191 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.297207 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.377920 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ghk5r"] Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.378828 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:50 crc kubenswrapper[5050]: E0131 05:21:50.379101 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.400926 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.401054 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.401080 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.401110 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.401132 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.402626 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.422854 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.444910 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.464683 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.483076 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.490646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkjt\" (UniqueName: \"kubernetes.io/projected/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-kube-api-access-lqkjt\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.490702 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.496773 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.502913 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.502974 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.502992 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.503013 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.503028 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.511479 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.528075 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.543284 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.560475 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.577378 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:46Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418370 6354 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418426 6354 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:46.418439 6354 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0131 05:21:46.418469 6354 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:46.418483 6354 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:46.418495 6354 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:46.418508 6354 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:46.418507 6354 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418623 6354 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.419148 6354 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 05:21:46.419177 6354 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 05:21:46.419210 6354 factory.go:656] Stopping watch factory\\\\nI0131 05:21:46.419236 6354 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:46.419279 6354 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 
05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwc
bj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.591763 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqkjt\" 
(UniqueName: \"kubernetes.io/projected/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-kube-api-access-lqkjt\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.591821 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:50 crc kubenswrapper[5050]: E0131 05:21:50.591932 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:50 crc kubenswrapper[5050]: E0131 05:21:50.592013 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:21:51.091994806 +0000 UTC m=+36.141156412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.594293 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc 
kubenswrapper[5050]: I0131 05:21:50.605379 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.605441 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.605457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.605482 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.605499 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.607107 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.620584 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqkjt\" (UniqueName: \"kubernetes.io/projected/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-kube-api-access-lqkjt\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.621875 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.637378 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.650103 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:50Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.691413 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:20:00.937234502 +0000 UTC Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.708303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.708351 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.708368 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.708394 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.708412 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.819677 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.819715 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.819727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.819745 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.819758 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.922673 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.922729 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.922746 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.922771 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:50 crc kubenswrapper[5050]: I0131 05:21:50.922788 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:50Z","lastTransitionTime":"2026-01-31T05:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.026090 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.026156 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.026181 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.026212 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.026234 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.097089 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.097248 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.097320 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:21:52.097300529 +0000 UTC m=+37.146462135 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.097635 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/1.log" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.102941 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" event={"ID":"824e777c-379f-47d8-bc4f-c8d3b0f5ad52","Type":"ContainerStarted","Data":"2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.126566 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a
970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.128183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.128234 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.128253 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.128277 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.128298 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.144559 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.162373 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.179156 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.193652 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.208792 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.225213 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.231063 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.231116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.231132 5050 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.231158 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.231177 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.242583 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.258228 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.278463 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.309385 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:46Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418370 6354 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418426 6354 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:46.418439 6354 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0131 05:21:46.418469 6354 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:46.418483 6354 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:46.418495 6354 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:46.418508 6354 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:46.418507 6354 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418623 6354 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.419148 6354 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 05:21:46.419177 6354 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 05:21:46.419210 6354 factory.go:656] Stopping watch factory\\\\nI0131 05:21:46.419236 6354 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:46.419279 6354 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 
05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwc
bj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.325369 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc 
kubenswrapper[5050]: I0131 05:21:51.334712 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.334786 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.334812 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.334843 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.334867 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.347046 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.367660 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.383751 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.400555 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:51Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.438526 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.438581 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.438596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.438619 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.438635 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.501650 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.501861 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:22:07.501821986 +0000 UTC m=+52.550983622 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.502002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.502217 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:51 
crc kubenswrapper[5050]: E0131 05:21:51.502316 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:07.502293267 +0000 UTC m=+52.551454893 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.541424 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.541467 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.541485 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.541508 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.541524 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.603739 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.603838 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.603900 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604082 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604086 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604140 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604132 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604189 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604194 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:07.60416551 +0000 UTC m=+52.653327146 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604211 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604290 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:07.604264362 +0000 UTC m=+52.653425988 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604160 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.604487 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:07.604452677 +0000 UTC m=+52.653614313 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.644804 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.644856 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.644873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.644897 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.644914 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.692488 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:43:41.167859295 +0000 UTC Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.735416 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.735494 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.735526 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.735616 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.735652 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.735747 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.735827 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:51 crc kubenswrapper[5050]: E0131 05:21:51.736020 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.749284 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.749329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.749341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.749359 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.749372 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.852538 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.852592 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.852604 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.852624 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.852637 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.955628 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.955688 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.955708 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.955733 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:51 crc kubenswrapper[5050]: I0131 05:21:51.955750 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:51Z","lastTransitionTime":"2026-01-31T05:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.056896 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.056996 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.057025 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.057056 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.057077 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.080696 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:52Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.085882 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.085938 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.085985 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.086011 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.086031 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.104993 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:52Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.110582 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.110635 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.110628 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.110651 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.110757 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.110786 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.110987 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.111119 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:21:54.111091492 +0000 UTC m=+39.160253128 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.131423 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:52Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.136181 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.136249 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.136271 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.136302 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.136327 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.156160 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:52Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.160984 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.161050 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.161073 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.161101 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.161124 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.181909 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:52Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:52 crc kubenswrapper[5050]: E0131 05:21:52.182199 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.185989 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.186220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.186441 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.186640 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.186863 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.290172 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.290531 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.290704 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.290922 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.291139 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.393738 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.393803 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.393826 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.393850 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.393868 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.495915 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.496029 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.496048 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.496072 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.496089 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.599719 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.599784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.599802 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.599826 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.599843 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.693635 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:40:49.180558689 +0000 UTC Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.702944 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.703049 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.703072 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.703105 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.703125 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.806778 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.806826 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.806840 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.806858 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.806871 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.909640 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.909717 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.909741 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.909769 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:52 crc kubenswrapper[5050]: I0131 05:21:52.909789 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:52Z","lastTransitionTime":"2026-01-31T05:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.013063 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.013186 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.013209 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.013238 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.013258 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.115732 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.115786 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.115802 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.115825 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.115842 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.218221 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.218260 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.218271 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.218288 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.218300 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.321265 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.321324 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.321341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.321364 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.321382 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.425161 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.425229 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.425247 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.425274 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.425291 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.529355 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.529447 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.529477 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.529517 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.529557 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.632431 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.632489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.632506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.632530 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.632567 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.694545 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:03:38.910122267 +0000 UTC Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735310 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735439 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735636 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735670 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735554 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735686 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: E0131 05:21:53.735487 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735861 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.735890 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: E0131 05:21:53.735919 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.736014 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:53 crc kubenswrapper[5050]: E0131 05:21:53.736186 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:53 crc kubenswrapper[5050]: E0131 05:21:53.736287 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.839930 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.840019 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.840036 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.840059 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.840077 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.942811 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.942871 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.942888 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.942914 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:53 crc kubenswrapper[5050]: I0131 05:21:53.942930 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:53Z","lastTransitionTime":"2026-01-31T05:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.046495 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.046555 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.046574 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.046596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.046610 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.131551 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:54 crc kubenswrapper[5050]: E0131 05:21:54.131741 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:54 crc kubenswrapper[5050]: E0131 05:21:54.131866 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:21:58.131832339 +0000 UTC m=+43.180994035 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.159551 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.159589 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.159598 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.159614 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.159625 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.262462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.262512 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.262528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.262552 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.262569 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.365395 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.365480 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.365503 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.365528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.365547 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.468522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.468570 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.468588 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.468611 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.468628 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.571096 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.571160 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.571181 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.571206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.571224 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.674371 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.674427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.674443 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.674462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.674476 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.695140 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:29:36.764590255 +0000 UTC Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.777876 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.777926 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.777943 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.778006 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.778025 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.881211 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.881291 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.881309 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.881339 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.881359 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.984627 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.984685 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.984705 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.984729 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:54 crc kubenswrapper[5050]: I0131 05:21:54.984747 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:54Z","lastTransitionTime":"2026-01-31T05:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.087654 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.087732 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.087756 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.087791 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.087817 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.191200 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.191259 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.191278 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.191302 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.191320 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.294109 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.294173 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.294191 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.294217 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.294235 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.396511 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.396559 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.396575 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.396599 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.396616 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.499771 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.499845 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.500000 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.500032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.500056 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.603517 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.603614 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.603631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.603668 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.603689 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.696345 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:31:46.197309306 +0000 UTC Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.706720 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.706788 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.706806 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.706831 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.706851 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.736225 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:55 crc kubenswrapper[5050]: E0131 05:21:55.736400 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.736507 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.736534 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.736522 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:55 crc kubenswrapper[5050]: E0131 05:21:55.736653 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:55 crc kubenswrapper[5050]: E0131 05:21:55.736817 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:21:55 crc kubenswrapper[5050]: E0131 05:21:55.736940 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.760562 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.786094 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.804233 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.809825 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.809873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.809891 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.809917 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.809938 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.827062 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.853323 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.876536 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a
970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.893249 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.908475 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.913994 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.914076 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.914092 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.914110 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.914124 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:55Z","lastTransitionTime":"2026-01-31T05:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.924365 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.936876 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.950160 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.967285 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.982337 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:55 crc kubenswrapper[5050]: I0131 05:21:55.996098 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:21:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.017262 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.017341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.017366 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.017404 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.017430 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.018557 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9650df79a2054a7b323994265f1dc484a7c9a1d5c0399145341ceacf1117003f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:46Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418370 6354 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418426 6354 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:46.418439 6354 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0131 05:21:46.418469 6354 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:46.418483 6354 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:46.418495 6354 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:46.418508 6354 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:46.418507 6354 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.418623 6354 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 05:21:46.419148 6354 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 05:21:46.419177 6354 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 05:21:46.419210 6354 factory.go:656] Stopping watch factory\\\\nI0131 05:21:46.419236 6354 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:46.419279 6354 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 05\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 
05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwc
bj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.031405 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:21:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:21:56 crc 
kubenswrapper[5050]: I0131 05:21:56.121403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.121494 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.121516 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.121543 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.121562 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.228813 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.228900 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.228924 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.228993 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.229023 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.332621 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.332679 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.332695 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.332719 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.332736 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.436897 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.436972 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.436986 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.437007 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.437021 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.539937 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.540038 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.540057 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.540081 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.540100 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.642240 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.642284 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.642296 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.642313 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.642327 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.696832 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:16:38.823053861 +0000 UTC Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.745523 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.745572 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.745584 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.745604 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.745617 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.849150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.849205 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.849222 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.849247 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.849264 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.953194 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.953256 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.953272 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.953297 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:56 crc kubenswrapper[5050]: I0131 05:21:56.953316 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:56Z","lastTransitionTime":"2026-01-31T05:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.056009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.056074 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.056091 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.056117 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.056136 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.158783 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.158851 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.158873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.158900 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.158918 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.261604 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.261658 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.261670 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.261716 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.261729 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.364554 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.364605 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.364623 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.364648 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.364666 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.480654 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.480710 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.480726 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.480751 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.480768 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.584036 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.584089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.584106 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.584130 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.584147 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.687563 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.687643 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.687669 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.687694 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.687712 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.698001 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:43:53.225016172 +0000 UTC Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.735344 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.735408 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.735342 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.735457 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:57 crc kubenswrapper[5050]: E0131 05:21:57.735610 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:21:57 crc kubenswrapper[5050]: E0131 05:21:57.736208 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:57 crc kubenswrapper[5050]: E0131 05:21:57.736307 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:57 crc kubenswrapper[5050]: E0131 05:21:57.736518 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.790870 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.790932 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.790980 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.791012 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.791035 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.893905 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.894038 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.894065 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.894094 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.894115 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.997359 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.997421 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.997438 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.997462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:57 crc kubenswrapper[5050]: I0131 05:21:57.997481 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:57Z","lastTransitionTime":"2026-01-31T05:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.101099 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.101155 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.101192 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.101226 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.101247 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.187334 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:58 crc kubenswrapper[5050]: E0131 05:21:58.187511 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:58 crc kubenswrapper[5050]: E0131 05:21:58.187605 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:22:06.187580786 +0000 UTC m=+51.236742422 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.204035 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.204097 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.204120 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.204150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.204170 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.310864 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.310994 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.311017 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.311055 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.311074 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.414501 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.414556 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.414573 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.414596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.414613 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.517135 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.517191 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.517213 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.517240 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.517261 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.620680 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.620747 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.620769 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.620798 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.620824 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.698939 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:43:37.725965705 +0000 UTC Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.724153 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.724245 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.724276 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.724303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.724320 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.827476 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.827541 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.827557 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.827585 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.827609 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.931074 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.931138 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.931155 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.931178 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:58 crc kubenswrapper[5050]: I0131 05:21:58.931196 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:58Z","lastTransitionTime":"2026-01-31T05:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.034523 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.034587 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.034604 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.034631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.034649 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.138007 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.138071 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.138089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.138116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.138135 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.240504 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.240548 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.240559 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.240574 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.240585 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.343939 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.344026 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.344044 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.344068 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.344086 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.447332 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.447400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.447419 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.447443 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.447461 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.549756 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.549820 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.549837 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.549863 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.549881 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.653223 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.653280 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.653297 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.653322 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.653338 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.699337 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:52:43.129708012 +0000 UTC Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.735805 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.735831 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.735911 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:21:59 crc kubenswrapper[5050]: E0131 05:21:59.736089 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:21:59 crc kubenswrapper[5050]: E0131 05:21:59.736226 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:21:59 crc kubenswrapper[5050]: E0131 05:21:59.736319 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.736650 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:21:59 crc kubenswrapper[5050]: E0131 05:21:59.736757 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.756533 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.756864 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.757399 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.757727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.757943 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.862256 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.862510 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.862532 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.862562 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.862587 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.965803 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.966316 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.966476 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.966616 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:21:59 crc kubenswrapper[5050]: I0131 05:21:59.966745 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:21:59Z","lastTransitionTime":"2026-01-31T05:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.070042 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.070382 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.070539 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.070748 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.070922 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.174053 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.174089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.174098 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.174113 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.174127 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.277483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.277558 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.277578 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.277604 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.277621 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.380753 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.380834 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.380854 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.380880 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.380901 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.483290 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.483346 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.483363 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.483387 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.483438 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.586111 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.586172 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.586189 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.586212 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.586228 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.689522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.689578 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.689595 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.689617 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.689633 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.699773 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:30:48.637866954 +0000 UTC Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.793221 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.793302 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.793322 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.793343 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.793359 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.896933 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.897017 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.897036 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.897064 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:00 crc kubenswrapper[5050]: I0131 05:22:00.897082 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:00Z","lastTransitionTime":"2026-01-31T05:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.000508 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.000840 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.001018 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.001171 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.001327 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.105252 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.105295 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.105314 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.105340 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.105358 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.208155 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.208215 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.208233 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.208258 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.208275 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.311543 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.311588 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.311607 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.311629 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.311648 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.415306 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.415381 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.415408 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.415440 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.415460 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.518457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.518519 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.518538 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.518565 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.518582 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.621344 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.621435 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.621466 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.621499 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.621519 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.700911 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:38:31.229743966 +0000 UTC Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.725351 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.725436 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.725457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.725489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.725510 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.737198 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.737231 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.737271 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.737840 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:01 crc kubenswrapper[5050]: E0131 05:22:01.737907 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:01 crc kubenswrapper[5050]: E0131 05:22:01.738087 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:01 crc kubenswrapper[5050]: E0131 05:22:01.738207 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:01 crc kubenswrapper[5050]: E0131 05:22:01.738235 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.828806 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.828852 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.828870 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.828893 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.828915 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.933057 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.933132 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.933153 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.933183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:01 crc kubenswrapper[5050]: I0131 05:22:01.933212 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:01Z","lastTransitionTime":"2026-01-31T05:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.038123 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.038873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.039107 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.039308 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.039482 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.145380 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.145822 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.145987 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.146154 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.146309 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.250021 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.250409 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.250555 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.250696 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.250834 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.354114 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.354415 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.354614 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.354784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.354931 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.379165 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.379229 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.379275 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.379300 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.379320 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: E0131 05:22:02.399642 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.435593 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.435639 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.435656 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.435693 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.435711 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: E0131 05:22:02.478983 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.484088 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.484234 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.484341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.484430 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.484505 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: E0131 05:22:02.502214 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.506983 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.507021 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.507032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.507047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.507057 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: E0131 05:22:02.520015 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.524058 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.524091 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.524122 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.524138 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.524148 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: E0131 05:22:02.536571 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: E0131 05:22:02.536800 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.538442 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.538494 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.538511 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.538532 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.538548 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.641433 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.641906 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.642094 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.642252 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.642380 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.701466 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:41:48.577137305 +0000 UTC Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.737182 5050 scope.go:117] "RemoveContainer" containerID="db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.746778 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.746900 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.746919 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.746940 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.747016 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.756656 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.773764 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.791347 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.813283 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a
970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.833620 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.850194 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.850255 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.850277 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 
05:22:02.850311 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.850336 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.856667 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.879745 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.900072 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 
9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.912656 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.930934 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.949801 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.952249 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.952279 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.952288 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.952301 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.952310 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:02Z","lastTransitionTime":"2026-01-31T05:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.965671 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z 
is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.981258 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:02 crc kubenswrapper[5050]: I0131 05:22:02.992388 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.000672 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:02Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.010811 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.054434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.054472 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.054484 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.054500 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.054513 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.154613 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/1.log" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.157283 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.157349 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.157366 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.157391 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.157408 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.160993 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.161157 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.161875 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.171912 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.178733 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.194363 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.218064 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.236410 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.259718 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.259767 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.259784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc 
kubenswrapper[5050]: I0131 05:22:03.259807 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.259823 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.272053 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.293440 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.303944 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.319049 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.332058 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a
970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.344895 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.356518 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.361788 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.361848 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.361871 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.361900 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.361920 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.373992 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z 
is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.391303 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 
9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 
05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.403539 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc 
kubenswrapper[5050]: I0131 05:22:03.417229 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c1
28a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.432223 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.443388 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.457629 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.464464 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.464498 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.464508 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.464522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.464531 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.471750 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.490262 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.510662 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.525016 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.541569 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.555533 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc 
kubenswrapper[5050]: I0131 05:22:03.566739 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.566758 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.566765 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.566778 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.566788 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.571581 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f297
51a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.589377 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.604633 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.625102 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 
9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 
05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.643057 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.661102 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.669083 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.669323 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.669435 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.669528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.669617 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.679315 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3
c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.698162 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.702057 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:32:40.727845228 +0000 UTC Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.716013 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:03Z is after 
2025-08-24T17:21:41Z" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.736119 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.736167 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.736245 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:03 crc kubenswrapper[5050]: E0131 05:22:03.736442 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.736466 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:03 crc kubenswrapper[5050]: E0131 05:22:03.736607 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:03 crc kubenswrapper[5050]: E0131 05:22:03.736835 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:03 crc kubenswrapper[5050]: E0131 05:22:03.736983 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.770421 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.772595 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.772622 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.772632 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.772644 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.772653 5050 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.874618 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.874665 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.874681 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.874707 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.874725 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.976993 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.977054 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.977076 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.977105 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:03 crc kubenswrapper[5050]: I0131 05:22:03.977130 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:03Z","lastTransitionTime":"2026-01-31T05:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.079679 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.079717 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.079728 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.079743 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.079755 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.165420 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/2.log" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.166437 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/1.log" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.169492 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358" exitCode=1 Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.169741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.169987 5050 scope.go:117] "RemoveContainer" containerID="db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.181275 5050 scope.go:117] "RemoveContainer" containerID="aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358" Jan 31 05:22:04 crc kubenswrapper[5050]: E0131 05:22:04.181525 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.181812 5050 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.181837 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.181848 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.181865 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.181876 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.191121 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z 
is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.220599 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1927a7c29a85b16dd5e49b6ea1ab35a826a5129c74408e513fcac93002f1da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"message\\\":\\\"ller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 05:21:48.088176 6485 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:21:48.088216 6485 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:21:48.088242 6485 handler.go:190] Sending *v1.EgressFirewall event handler 
9 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 05:21:48.088267 6485 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:21:48.088273 6485 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 05:21:48.088282 6485 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:21:48.088290 6485 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 05:21:48.088297 6485 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 05:21:48.088305 6485 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 05:21:48.088341 6485 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 05:21:48.088387 6485 factory.go:656] Stopping watch factory\\\\nI0131 05:21:48.088422 6485 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:21:48.088460 6485 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 05:21:48.088476 6485 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 05:21:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 
7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.231014 5050 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc 
kubenswrapper[5050]: I0131 05:22:04.242249 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.254321 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.266836 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.284909 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.285183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.285263 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.285338 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.285396 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.291082 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2
e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.302847 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.318869 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.335394 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.347674 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.360530 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.372057 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.384264 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.392217 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.392814 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.392895 5050 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.393266 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.393352 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.403709 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.422863 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.441074 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:04Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.496403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.496452 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.496464 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.496483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.496500 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.599641 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.599705 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.599721 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.599742 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.599756 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.702486 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:25:31.468062919 +0000 UTC Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.703320 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.703363 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.703375 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.703392 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.703402 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.805569 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.805636 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.805652 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.805679 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.805696 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.908416 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.908753 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.908846 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.908977 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:04 crc kubenswrapper[5050]: I0131 05:22:04.909090 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:04Z","lastTransitionTime":"2026-01-31T05:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.011998 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.012048 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.012069 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.012093 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.012111 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.114886 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.115056 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.115157 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.115259 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.115341 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.173516 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/2.log" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.177703 5050 scope.go:117] "RemoveContainer" containerID="aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358" Jan 31 05:22:05 crc kubenswrapper[5050]: E0131 05:22:05.177880 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.195675 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.218146 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.218200 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.218217 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.218247 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.218265 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.218426 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.237367 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.252815 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.265866 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.277642 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.292687 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.308166 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.320601 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.320664 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.320684 5050 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.320745 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.320768 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.328658 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.347982 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.367769 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.383499 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.403930 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.418516 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.423141 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.423303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.423395 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.423483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.423573 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.435315 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f297
51a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.451379 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.468357 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.527320 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.527386 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.527403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.527427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.527444 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.630353 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.630434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.630453 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.630480 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.630497 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.703481 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:43:04.738434661 +0000 UTC Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.732993 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.733052 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.733074 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.733099 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.733116 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.735637 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.735663 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.735792 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:05 crc kubenswrapper[5050]: E0131 05:22:05.735796 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.735836 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:05 crc kubenswrapper[5050]: E0131 05:22:05.736010 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:05 crc kubenswrapper[5050]: E0131 05:22:05.736271 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:05 crc kubenswrapper[5050]: E0131 05:22:05.736385 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.762593 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.786325 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.805694 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.826919 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.835151 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.835184 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.835192 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.835206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.835215 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.843537 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc 
kubenswrapper[5050]: I0131 05:22:05.863425 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.888471 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.906804 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17
114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.925888 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.938087 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.938152 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.938171 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.938197 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.938215 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:05Z","lastTransitionTime":"2026-01-31T05:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.944449 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.959822 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.975421 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a
970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:05 crc kubenswrapper[5050]: I0131 05:22:05.995053 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:05Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.017902 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:06Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.037139 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:06Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.041037 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.041094 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.041111 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.041137 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.041154 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.055807 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c1
0321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:06Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.070292 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:06Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.143492 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.143585 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.143604 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.143631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.143648 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.246408 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.246451 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.246467 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.246489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.246506 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.274515 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:06 crc kubenswrapper[5050]: E0131 05:22:06.274763 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:22:06 crc kubenswrapper[5050]: E0131 05:22:06.274859 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:22:22.274834021 +0000 UTC m=+67.323995657 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.350026 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.350313 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.350460 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.350628 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.350751 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.453659 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.453711 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.453723 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.453742 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.453756 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.556902 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.556974 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.556996 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.557020 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.557036 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.660726 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.660793 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.660811 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.660839 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.660858 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.704642 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:30:36.300593506 +0000 UTC Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.763700 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.763775 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.763801 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.763836 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.763859 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.871451 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.871913 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.872211 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.874301 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.874555 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.980493 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.980726 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.980864 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.980970 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:06 crc kubenswrapper[5050]: I0131 05:22:06.981082 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:06Z","lastTransitionTime":"2026-01-31T05:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.084014 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.084249 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.084309 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.084392 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.084447 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.186794 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.187205 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.187218 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.187238 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.187251 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.290162 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.290211 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.290228 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.290251 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.290269 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.393491 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.393563 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.393582 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.393609 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.393628 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.497989 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.498049 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.498068 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.498097 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.498116 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.587034 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.587297 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 05:22:39.587258289 +0000 UTC m=+84.636419925 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.587357 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.587581 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.587679 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:39.58764971 +0000 UTC m=+84.636811336 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.601505 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.601603 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.601623 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.601647 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.601705 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.688782 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.688880 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.688930 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689136 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689162 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689182 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689222 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689293 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689242 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:39.689221884 +0000 UTC m=+84.738383510 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689344 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689371 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689373 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:39.689342457 +0000 UTC m=+84.738504093 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.689449 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 05:22:39.689423459 +0000 UTC m=+84.738585095 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.704813 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:55:35.962934917 +0000 UTC Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.704861 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.704926 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.704989 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.705017 5050 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.705036 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.735722 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.735784 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.735836 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.735889 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.736052 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.736198 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.735854 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:07 crc kubenswrapper[5050]: E0131 05:22:07.736327 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.807432 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.807497 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.807566 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.808100 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.808160 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.911659 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.911712 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.911733 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.911762 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:07 crc kubenswrapper[5050]: I0131 05:22:07.911780 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:07Z","lastTransitionTime":"2026-01-31T05:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.014066 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.014102 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.014112 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.014128 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.014140 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.116403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.116455 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.116467 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.116484 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.116497 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.220095 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.220146 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.220169 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.220197 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.220218 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.323735 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.323781 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.323797 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.323820 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.323836 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.427109 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.427199 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.427217 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.427241 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.427258 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.530896 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.531022 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.531046 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.531072 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.531089 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.634214 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.634276 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.634297 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.634323 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.634342 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.706009 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:45:37.163421783 +0000 UTC Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.737502 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.737536 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.737546 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.737560 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.737572 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.840286 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.840367 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.840385 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.840412 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.840431 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.943457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.943502 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.943513 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.943532 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:08 crc kubenswrapper[5050]: I0131 05:22:08.943545 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:08Z","lastTransitionTime":"2026-01-31T05:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.046746 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.046785 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.046797 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.046817 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.046830 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.151216 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.151361 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.151388 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.151415 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.151486 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.254277 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.254335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.254354 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.254376 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.254393 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.357404 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.357485 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.357501 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.357525 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.357536 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.460009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.460054 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.460069 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.460089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.460105 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.563164 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.563226 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.563248 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.563272 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.563289 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.666419 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.667156 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.667200 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.667231 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.667254 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.707162 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:45:21.978676265 +0000 UTC Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.736375 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.736435 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.736500 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:09 crc kubenswrapper[5050]: E0131 05:22:09.736498 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.736434 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:09 crc kubenswrapper[5050]: E0131 05:22:09.736588 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:09 crc kubenswrapper[5050]: E0131 05:22:09.736716 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:09 crc kubenswrapper[5050]: E0131 05:22:09.736847 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.769945 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.770032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.770049 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.770075 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.770092 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.872833 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.872891 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.872908 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.872933 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.872981 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.976300 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.976593 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.976655 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.976751 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:09 crc kubenswrapper[5050]: I0131 05:22:09.976821 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:09Z","lastTransitionTime":"2026-01-31T05:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.079507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.079569 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.079614 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.079641 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.079662 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.182574 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.182625 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.182638 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.182657 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.182671 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.285745 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.285800 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.285817 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.285842 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.285860 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.388460 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.388525 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.388547 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.388578 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.388599 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.491588 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.491663 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.491678 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.491699 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.491713 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.594461 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.594527 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.594548 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.594577 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.594598 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.698445 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.698504 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.698524 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.698554 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.698573 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.708081 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:24:23.850645443 +0000 UTC Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.801801 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.801859 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.801875 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.801898 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.801916 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.904426 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.904483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.904500 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.904524 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:10 crc kubenswrapper[5050]: I0131 05:22:10.904541 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:10Z","lastTransitionTime":"2026-01-31T05:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.007502 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.007582 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.007611 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.007642 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.007659 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.110507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.110603 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.110631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.110659 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.110686 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.213286 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.213653 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.213820 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.213980 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.214116 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.316721 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.316834 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.316861 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.316893 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.316915 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.420487 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.420548 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.420565 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.420589 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.420607 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.524154 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.524212 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.524229 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.524254 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.524271 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.626838 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.626926 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.626945 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.627011 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.627031 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.708928 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:31:58.964920444 +0000 UTC Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.730247 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.730296 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.730313 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.730336 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.730353 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.735904 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.735930 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.736033 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:11 crc kubenswrapper[5050]: E0131 05:22:11.736115 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.736131 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:11 crc kubenswrapper[5050]: E0131 05:22:11.736246 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:11 crc kubenswrapper[5050]: E0131 05:22:11.736339 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:11 crc kubenswrapper[5050]: E0131 05:22:11.736464 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.833222 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.833266 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.833287 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.833317 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.833339 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.936262 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.936313 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.936329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.936351 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:11 crc kubenswrapper[5050]: I0131 05:22:11.936369 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:11Z","lastTransitionTime":"2026-01-31T05:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.040096 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.040144 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.040163 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.040185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.040202 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.142568 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.142687 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.142740 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.142763 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.142780 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.246150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.246494 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.246796 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.247044 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.247237 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.350989 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.351606 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.351772 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.352134 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.352451 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.456043 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.456097 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.456116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.456140 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.456157 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.559388 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.559441 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.559460 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.559486 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.559504 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.662118 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.662177 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.662196 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.662226 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.662246 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.710092 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:30:51.172919771 +0000 UTC Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.762889 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.762946 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.762992 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.763018 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.763037 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: E0131 05:22:12.784678 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.791183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.791249 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.791268 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.791295 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.791314 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: E0131 05:22:12.811178 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.817138 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.817408 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.817611 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.817816 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.818039 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: E0131 05:22:12.839817 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.845564 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.845628 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.845647 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.845673 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.845692 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: E0131 05:22:12.864872 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.871705 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.871794 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.871821 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.871856 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.871893 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:12 crc kubenswrapper[5050]: E0131 05:22:12.896515 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:12Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:12 crc kubenswrapper[5050]: E0131 05:22:12.896747 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.900183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.900263 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.900281 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.900334 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:12 crc kubenswrapper[5050]: I0131 05:22:12.900352 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:12Z","lastTransitionTime":"2026-01-31T05:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.003717 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.003793 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.003809 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.003831 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.003844 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.107646 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.107705 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.107722 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.107748 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.107766 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.211130 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.211182 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.211200 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.211227 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.211245 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.314304 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.314682 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.314817 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.314999 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.315139 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.418714 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.418755 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.418768 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.418787 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.418801 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.522295 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.522750 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.523024 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.523193 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.523337 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.626440 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.626495 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.626516 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.626541 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.626561 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.710671 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:42:37.522647251 +0000 UTC Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.728885 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.728998 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.729016 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.729040 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.729060 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.736369 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.736414 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:13 crc kubenswrapper[5050]: E0131 05:22:13.736546 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.736661 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:13 crc kubenswrapper[5050]: E0131 05:22:13.736892 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:13 crc kubenswrapper[5050]: E0131 05:22:13.737113 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.737175 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:13 crc kubenswrapper[5050]: E0131 05:22:13.737425 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.831586 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.831670 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.831696 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.831727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.831751 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.935449 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.935516 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.935534 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.935559 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:13 crc kubenswrapper[5050]: I0131 05:22:13.935579 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:13Z","lastTransitionTime":"2026-01-31T05:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.039143 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.039210 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.039230 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.039257 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.039276 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.142098 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.142129 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.142140 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.142154 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.142163 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.244188 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.244256 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.244279 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.244315 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.244342 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.347674 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.347733 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.347752 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.347780 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.347798 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.450608 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.450692 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.450722 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.450750 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.450772 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.554518 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.554577 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.554596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.554620 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.554638 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.657440 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.657499 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.657517 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.657541 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.657559 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.711514 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:51:09.837956286 +0000 UTC Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.760779 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.760833 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.760852 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.760876 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.760894 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.864576 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.864619 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.864636 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.864657 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.864676 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.967942 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.968032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.968050 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.968075 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:14 crc kubenswrapper[5050]: I0131 05:22:14.968092 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:14Z","lastTransitionTime":"2026-01-31T05:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.070529 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.070763 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.070907 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.071086 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.071239 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.175224 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.175364 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.175391 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.175457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.175488 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.279052 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.279089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.279099 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.279114 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.279125 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.382607 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.382692 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.382715 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.382753 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.382777 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.486261 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.486338 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.486366 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.486400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.486423 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.588673 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.588744 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.588761 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.588785 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.588802 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.691303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.691347 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.691356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.691374 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.691385 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.712003 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:47:28.741787289 +0000 UTC Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.735810 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.735931 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:15 crc kubenswrapper[5050]: E0131 05:22:15.736110 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.736163 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.736177 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:15 crc kubenswrapper[5050]: E0131 05:22:15.736365 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:15 crc kubenswrapper[5050]: E0131 05:22:15.736481 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:15 crc kubenswrapper[5050]: E0131 05:22:15.736741 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.758883 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.784284 5050 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:
21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075eff
fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.794671 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.794722 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.794736 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.794757 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.794773 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.804279 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3
c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.822461 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.839662 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.856513 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.873539 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.889560 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.898819 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.899133 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.899335 5050 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.899527 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.899709 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:15Z","lastTransitionTime":"2026-01-31T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.912363 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.932211 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.950584 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:15 crc kubenswrapper[5050]: I0131 05:22:15.970793 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:15Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.003464 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.003522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.003541 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.003566 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.003582 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.004737 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:16Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.024358 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:16Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.048261 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:16Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.069425 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:16Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.091862 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T05:22:16Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.106315 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.106378 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.106397 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.106421 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.106442 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.209358 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.209438 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.209460 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.209492 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.209515 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.312842 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.312917 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.312936 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.312990 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.313012 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.416756 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.416817 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.416837 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.416865 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.416884 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.519654 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.519723 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.519742 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.519770 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.519789 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.624206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.624291 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.624317 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.624352 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.624673 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.712458 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:06:41.61656516 +0000 UTC Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.727733 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.727818 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.727848 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.727880 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.727905 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.831123 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.831185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.831202 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.831227 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.831246 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.934285 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.934342 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.934359 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.934385 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:16 crc kubenswrapper[5050]: I0131 05:22:16.934403 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:16Z","lastTransitionTime":"2026-01-31T05:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.037444 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.037495 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.037514 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.037538 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.037554 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.140840 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.140908 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.140931 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.140993 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.141020 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.243324 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.243379 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.243396 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.243419 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.243438 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.346723 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.346779 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.346795 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.346817 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.346835 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.450662 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.451126 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.451152 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.451180 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.451198 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.554446 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.554536 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.554554 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.554832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.554862 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.657932 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.658008 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.658025 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.658070 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.658089 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.713701 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:45:35.180002869 +0000 UTC Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.736319 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:17 crc kubenswrapper[5050]: E0131 05:22:17.736543 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.736996 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:17 crc kubenswrapper[5050]: E0131 05:22:17.737163 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.737412 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:17 crc kubenswrapper[5050]: E0131 05:22:17.737528 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.737986 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:17 crc kubenswrapper[5050]: E0131 05:22:17.738105 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.760924 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.761031 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.761051 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.761078 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.761104 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.864505 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.864576 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.864602 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.864631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.864654 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.966426 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.966474 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.966489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.966509 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:17 crc kubenswrapper[5050]: I0131 05:22:17.966525 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:17Z","lastTransitionTime":"2026-01-31T05:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.069832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.069886 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.069902 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.069927 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.069946 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.174543 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.174625 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.174645 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.174674 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.174691 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.277281 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.277341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.277358 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.277389 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.277407 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.380528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.380575 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.380591 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.380614 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.380631 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.482789 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.482839 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.482862 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.482886 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.482902 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.585804 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.585876 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.585899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.585925 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.585942 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.688523 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.688564 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.688580 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.688600 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.688618 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.714090 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:13:20.082895627 +0000 UTC Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.736885 5050 scope.go:117] "RemoveContainer" containerID="aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358" Jan 31 05:22:18 crc kubenswrapper[5050]: E0131 05:22:18.737279 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.791176 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.791220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.791234 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.791255 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.791269 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.893830 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.893896 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.893914 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.893941 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.894005 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.997247 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.997305 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.997322 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.997346 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:18 crc kubenswrapper[5050]: I0131 05:22:18.997364 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:18Z","lastTransitionTime":"2026-01-31T05:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.099762 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.099836 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.099854 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.099881 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.099901 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.203159 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.203215 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.203234 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.203283 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.203301 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.306006 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.306064 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.306081 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.306106 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.306125 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.409110 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.409169 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.409186 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.409258 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.409277 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.512322 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.512369 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.512384 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.512402 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.512415 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.615512 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.615567 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.615583 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.615603 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.615615 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.715048 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:08:57.753087041 +0000 UTC Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.718390 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.718434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.718454 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.718477 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.718494 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.736307 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.736403 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.736425 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.736336 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:19 crc kubenswrapper[5050]: E0131 05:22:19.736519 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:19 crc kubenswrapper[5050]: E0131 05:22:19.736673 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:19 crc kubenswrapper[5050]: E0131 05:22:19.736819 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:19 crc kubenswrapper[5050]: E0131 05:22:19.736933 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.820916 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.820999 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.821018 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.821043 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.821064 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.924044 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.924105 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.924122 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.924146 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:19 crc kubenswrapper[5050]: I0131 05:22:19.924163 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:19Z","lastTransitionTime":"2026-01-31T05:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.026582 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.026660 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.026680 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.026711 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.026737 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.129213 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.129266 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.129283 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.129305 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.129331 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.236567 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.236618 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.236635 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.236658 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.236674 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.339043 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.339089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.339106 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.339127 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.339143 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.441456 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.441484 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.441493 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.441506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.441515 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.543603 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.543636 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.543645 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.543658 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.543668 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.646047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.646116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.646130 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.646168 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.646180 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.716011 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:41:56.793829907 +0000 UTC Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.747719 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.747791 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.747803 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.747824 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.747836 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.849717 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.849742 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.849751 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.849766 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.849775 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.951898 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.951946 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.951982 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.952007 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:20 crc kubenswrapper[5050]: I0131 05:22:20.952022 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:20Z","lastTransitionTime":"2026-01-31T05:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.053783 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.053829 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.053841 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.053858 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.053870 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.155598 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.155629 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.155639 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.155656 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.155667 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.257534 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.257575 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.257584 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.257598 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.257609 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.359395 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.359422 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.359432 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.359446 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.359462 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.461880 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.461920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.461931 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.461965 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.461975 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.565110 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.565148 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.565158 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.565172 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.565181 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.667296 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.667354 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.667371 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.667397 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.667414 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.716796 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:46:24.730869114 +0000 UTC Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.736171 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.736209 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.736270 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.736183 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:21 crc kubenswrapper[5050]: E0131 05:22:21.736341 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:21 crc kubenswrapper[5050]: E0131 05:22:21.736417 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:21 crc kubenswrapper[5050]: E0131 05:22:21.736500 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:21 crc kubenswrapper[5050]: E0131 05:22:21.736601 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.769079 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.769113 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.769124 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.769137 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.769151 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.871389 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.871462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.871479 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.871505 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.871523 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.973775 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.973820 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.973832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.973849 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:21 crc kubenswrapper[5050]: I0131 05:22:21.973860 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:21Z","lastTransitionTime":"2026-01-31T05:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.076135 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.076185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.076200 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.076214 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.076225 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.179274 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.179317 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.179329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.179343 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.179354 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.281856 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.281887 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.281897 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.281910 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.281920 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.353057 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:22 crc kubenswrapper[5050]: E0131 05:22:22.353193 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:22:22 crc kubenswrapper[5050]: E0131 05:22:22.353259 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:22:54.353241955 +0000 UTC m=+99.402403631 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.383480 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.383547 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.383569 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.383594 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.383612 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.485411 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.485444 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.485457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.485475 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.485487 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.587585 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.587616 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.587629 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.587644 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.587655 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.691507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.691593 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.691612 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.691638 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.691656 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.717210 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:41:18.104510343 +0000 UTC Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.794647 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.794715 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.794758 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.794779 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.794792 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.898406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.898493 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.898514 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.898548 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:22 crc kubenswrapper[5050]: I0131 05:22:22.898570 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:22Z","lastTransitionTime":"2026-01-31T05:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.002155 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.002225 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.002249 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.002347 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.002397 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.092360 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.092420 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.092436 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.092456 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.092468 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.108541 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.113283 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.113320 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.113331 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.113348 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.113357 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.132480 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.137274 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.137335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.137353 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.137400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.137423 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.158774 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.163463 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.163520 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.163537 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.163561 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.163577 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.180712 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.186755 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.187058 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.187406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.187830 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.188134 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.209362 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:23Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.209580 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.211623 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.211675 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.211691 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.211716 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.211734 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.314125 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.314183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.314202 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.314226 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.314244 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.417263 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.417314 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.417330 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.417353 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.417372 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.519913 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.520043 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.520067 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.520097 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.520120 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.622434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.622501 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.622520 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.622546 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.622566 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.718019 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:11:21.5698892 +0000 UTC Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.725435 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.725476 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.725490 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.725510 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.725526 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.736172 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.736191 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.736280 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.736444 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.736677 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.736800 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.737069 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:23 crc kubenswrapper[5050]: E0131 05:22:23.737199 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.828713 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.828777 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.828809 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.828837 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.828857 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.931533 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.931594 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.931684 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.931711 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:23 crc kubenswrapper[5050]: I0131 05:22:23.931796 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:23Z","lastTransitionTime":"2026-01-31T05:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.035018 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.035072 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.035089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.035116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.035134 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.138879 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.139203 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.139448 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.139641 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.139821 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.244145 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.244189 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.244204 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.244226 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.244244 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.247331 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/0.log" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.247516 5050 generic.go:334] "Generic (PLEG): container finished" podID="eeb03b23-b94b-4aaf-aac2-a04db399ec55" containerID="b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2" exitCode=1 Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.247657 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerDied","Data":"b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.248349 5050 scope.go:117] "RemoveContainer" containerID="b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.259878 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.272451 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.281496 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.292400 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a
970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.304766 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.314931 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.327838 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.338168 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.347840 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.348009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.348047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.348132 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.348205 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.350641 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.361843 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9
790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.379889 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.390679 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.403999 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.419133 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.429142 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.443434 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.451498 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.451528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 
05:22:24.451539 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.451576 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.451588 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.458765 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:24Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.555209 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.555256 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.555308 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.555330 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.555346 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.658687 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.658736 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.658751 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.658773 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.658788 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.718698 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:48:01.331030332 +0000 UTC Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.761866 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.761906 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.761922 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.761946 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.761999 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.864851 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.864943 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.865001 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.865027 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.865043 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.967127 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.967170 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.967187 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.967206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:24 crc kubenswrapper[5050]: I0131 05:22:24.967220 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:24Z","lastTransitionTime":"2026-01-31T05:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.070174 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.070212 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.070228 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.070245 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.070256 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.173302 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.173355 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.173380 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.173409 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.173427 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.253310 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/0.log" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.253369 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerStarted","Data":"bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.270639 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.275727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.275749 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: 
I0131 05:22:25.275757 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.275770 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.275778 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.285295 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.296904 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.308119 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.318044 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.327392 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.335710 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.346583 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.358987 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.371101 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.378122 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.378151 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.378162 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.378180 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.378191 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.385792 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.394304 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.405496 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.418678 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.430832 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17
114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.444063 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.456820 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.480839 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 
05:22:25.480860 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.480867 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.480882 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.480891 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.582497 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.582517 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.582524 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.582534 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.582542 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.684840 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.684920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.684939 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.684980 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.684993 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.719282 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:23:59.595878163 +0000 UTC Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.735823 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:25 crc kubenswrapper[5050]: E0131 05:22:25.735925 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.736053 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.736115 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:25 crc kubenswrapper[5050]: E0131 05:22:25.736119 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.736225 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:25 crc kubenswrapper[5050]: E0131 05:22:25.736316 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:25 crc kubenswrapper[5050]: E0131 05:22:25.736494 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.752522 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.764870 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.775214 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.783657 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.786845 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.786884 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.786902 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.786925 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.786942 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.791820 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.801451 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9
790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.813738 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.828197 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.846998 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.859411 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.875986 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.887813 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.889990 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.890033 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.890051 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.890076 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.890093 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.900975 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.910991 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:
36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.919994 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.930979 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.940941 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:25Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.991647 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.991683 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.991694 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.991708 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:25 crc kubenswrapper[5050]: I0131 05:22:25.991718 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:25Z","lastTransitionTime":"2026-01-31T05:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.094173 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.094200 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.094208 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.094220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.094229 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.196080 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.196181 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.196206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.196236 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.196259 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.298201 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.298254 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.298270 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.298292 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.298308 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.400860 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.400902 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.400918 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.400941 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.400984 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.503342 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.503401 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.503423 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.503453 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.503474 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.605550 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.605609 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.605626 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.605649 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.605667 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.708089 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.708133 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.708148 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.708170 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.708187 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.719604 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:16:16.702576984 +0000 UTC Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.810115 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.810152 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.810164 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.810179 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.810190 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.913171 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.913218 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.913236 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.913263 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:26 crc kubenswrapper[5050]: I0131 05:22:26.913280 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:26Z","lastTransitionTime":"2026-01-31T05:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.015406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.015472 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.015490 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.015512 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.015558 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.117362 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.117394 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.117403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.117417 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.117425 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.219483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.219519 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.219528 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.219542 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.219552 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.321267 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.321319 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.321337 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.321362 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.321379 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.423150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.423187 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.423195 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.423209 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.423218 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.525427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.525457 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.525466 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.525477 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.525485 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.627165 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.627199 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.627209 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.627225 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.627260 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.720581 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:47:57.119125289 +0000 UTC Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.729214 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.729269 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.729287 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.729312 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.729330 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.735515 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.735538 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.735582 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.735616 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:27 crc kubenswrapper[5050]: E0131 05:22:27.735677 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:27 crc kubenswrapper[5050]: E0131 05:22:27.735852 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:27 crc kubenswrapper[5050]: E0131 05:22:27.735924 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:27 crc kubenswrapper[5050]: E0131 05:22:27.736065 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.831588 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.831619 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.831627 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.831643 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.831653 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.933751 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.933809 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.933835 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.933867 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:27 crc kubenswrapper[5050]: I0131 05:22:27.933891 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:27Z","lastTransitionTime":"2026-01-31T05:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.036537 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.036579 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.036590 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.036608 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.036622 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.138512 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.138552 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.138562 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.138580 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.138591 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.241411 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.241469 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.241486 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.241511 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.241530 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.343409 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.343432 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.343441 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.343453 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.343461 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.446632 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.446658 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.446666 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.446678 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.446686 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.549414 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.549479 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.549496 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.549521 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.549541 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.652559 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.652609 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.652626 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.652648 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.652667 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.721609 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:16:40.082534776 +0000 UTC Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.754811 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.755062 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.755190 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.755323 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.755462 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.858533 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.858588 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.858605 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.858628 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.858729 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.960718 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.960760 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.960784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.960806 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:28 crc kubenswrapper[5050]: I0131 05:22:28.960822 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:28Z","lastTransitionTime":"2026-01-31T05:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.063100 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.063144 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.063154 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.063165 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.063174 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.165408 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.165440 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.165449 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.165462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.165471 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.267171 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.267423 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.267545 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.267656 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.267739 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.369854 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.369893 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.369902 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.369919 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.369930 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.472896 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.472932 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.472979 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.473004 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.473020 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.575281 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.575326 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.575334 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.575350 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.575360 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.677758 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.677799 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.677813 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.677832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.677845 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.722168 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:06:10.909577832 +0000 UTC Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.735540 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.735590 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:29 crc kubenswrapper[5050]: E0131 05:22:29.735691 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.735714 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.735752 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:29 crc kubenswrapper[5050]: E0131 05:22:29.735814 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:29 crc kubenswrapper[5050]: E0131 05:22:29.735915 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:29 crc kubenswrapper[5050]: E0131 05:22:29.736019 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.779536 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.779561 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.779568 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.779579 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.779587 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.881444 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.881470 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.881480 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.881493 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.881504 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.983783 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.983827 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.983838 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.983856 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:29 crc kubenswrapper[5050]: I0131 05:22:29.983869 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:29Z","lastTransitionTime":"2026-01-31T05:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.086162 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.086188 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.086195 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.086207 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.086216 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.188489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.188548 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.188568 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.188592 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.188612 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.290189 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.290242 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.290260 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.290283 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.290300 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.392699 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.392731 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.392739 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.392752 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.392762 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.495014 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.495070 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.495087 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.495111 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.495131 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.597425 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.597482 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.597500 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.597521 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.597546 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.700867 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.700929 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.700946 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.701009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.701028 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.722347 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:20:34.285850474 +0000 UTC Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.804341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.804398 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.804416 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.804438 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.804454 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.907094 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.907158 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.907178 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.907204 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:30 crc kubenswrapper[5050]: I0131 05:22:30.907224 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:30Z","lastTransitionTime":"2026-01-31T05:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.010352 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.010519 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.010544 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.010572 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.010593 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.113934 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.114034 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.114053 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.114080 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.114098 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.217206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.217290 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.217311 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.217339 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.217359 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.320246 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.320311 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.320330 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.320356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.320374 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.423224 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.423274 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.423287 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.423306 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.423318 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.539267 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.539332 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.539355 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.539384 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.539405 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.642189 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.642259 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.642281 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.642311 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.642331 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.722865 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:36:42.555111524 +0000 UTC Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.736467 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.736824 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.736874 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.737019 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:31 crc kubenswrapper[5050]: E0131 05:22:31.737092 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.737320 5050 scope.go:117] "RemoveContainer" containerID="aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358" Jan 31 05:22:31 crc kubenswrapper[5050]: E0131 05:22:31.737376 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:31 crc kubenswrapper[5050]: E0131 05:22:31.737435 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:31 crc kubenswrapper[5050]: E0131 05:22:31.737533 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.744507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.744759 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.745028 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.745236 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.745480 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.848600 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.848655 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.848671 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.848693 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.848710 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.951752 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.951803 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.951821 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.951847 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:31 crc kubenswrapper[5050]: I0131 05:22:31.951867 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:31Z","lastTransitionTime":"2026-01-31T05:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.054632 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.054698 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.054715 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.054743 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.054765 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.157255 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.157339 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.157356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.157411 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.157432 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.261338 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.261382 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.261414 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.261432 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.261444 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.316784 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/2.log" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.320307 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.320907 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.335974 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.361370 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.363208 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.363267 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.363285 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc 
kubenswrapper[5050]: I0131 05:22:32.363311 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.363332 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.380635 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.394765 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.407863 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.421432 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.432469 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.445762 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.466067 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.466115 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.466130 5050 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.466150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.466163 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.471427 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.487551 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.503283 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.521239 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.550239 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.564384 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc 
kubenswrapper[5050]: I0131 05:22:32.568459 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.568504 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.568521 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.568543 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.568561 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.580361 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f297
51a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.594352 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.611117 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:32Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.670774 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.670835 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.670852 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.670904 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.670921 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.723473 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:48:01.07847863 +0000 UTC Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.773347 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.773403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.773417 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.773437 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.773448 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.876075 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.876142 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.876158 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.876183 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.876199 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.978317 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.978377 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.978400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.978429 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:32 crc kubenswrapper[5050]: I0131 05:22:32.978449 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:32Z","lastTransitionTime":"2026-01-31T05:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.080784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.080826 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.080844 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.080863 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.080879 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.183652 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.183756 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.183776 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.183802 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.183819 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.286617 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.286706 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.286724 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.286748 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.286767 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.328454 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/3.log" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.329494 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/2.log" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.333433 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" exitCode=1 Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.333493 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.333562 5050 scope.go:117] "RemoveContainer" containerID="aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.334359 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.334597 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.353805 5050 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.377376 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.389004 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.389230 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.389417 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.389594 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.389772 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.393437 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3
c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.410747 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.424464 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.437016 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.437695 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.437735 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.437747 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.437765 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.437776 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.452454 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.461331 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.464324 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.464375 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.464392 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.464414 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.464432 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.470246 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.484225 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.488131 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.488186 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.488204 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.488227 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.488243 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.488631 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.503476 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.507011 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.507522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.507570 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.507588 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 
crc kubenswrapper[5050]: I0131 05:22:33.507612 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.507629 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.520880 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.526176 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.531337 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.531385 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.531402 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.531423 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.531439 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.536612 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.550211 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.550464 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.552481 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.552539 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.552556 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.552584 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.552601 5050 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.553619 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc 
kubenswrapper[5050]: I0131 05:22:33.568539 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.585416 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.603866 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.632911 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaca4a2b683824d0b6851d173a1e5fb7ee4264fc1741c9e15635789efe09358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:03.700536 6700 obj_retry.go:551] Creating *factory.egressNode crc took: 2.973663ms\\\\nI0131 05:22:03.700573 6700 factory.go:1336] Added *v1.Node event handler 7\\\\nI0131 05:22:03.700616 6700 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0131 05:22:03.700629 6700 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 05:22:03.700645 6700 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 05:22:03.700670 6700 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 05:22:03.700719 6700 factory.go:656] Stopping watch factory\\\\nI0131 05:22:03.700751 6700 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 05:22:03.700943 6700 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 05:22:03.701067 6700 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 05:22:03.701111 6700 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:03.701145 6700 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:03.701227 6700 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:32Z\\\",\\\"message\\\":\\\".org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.250:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {de88cb48-af91-44f8-b3c0-73dcf8201ba5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:32.744559 7094 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:32.744558 7094 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0131 05:22:32.744574 7094 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 1.137459ms\\\\nI0131 05:22:32.744587 7094 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:32.744694 7094 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c94
5934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:33Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.655150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.655201 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.655220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.655245 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.655264 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.724122 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:24:10.607732972 +0000 UTC Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.735643 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.735690 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.735720 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.735795 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.735798 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.735930 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.736077 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:33 crc kubenswrapper[5050]: E0131 05:22:33.736283 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.757335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.757377 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.757400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.757427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.757449 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.860470 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.860557 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.860577 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.860604 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.860623 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.964128 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.964208 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.964253 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.964297 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:33 crc kubenswrapper[5050]: I0131 05:22:33.964315 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:33Z","lastTransitionTime":"2026-01-31T05:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.067775 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.067834 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.067849 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.067872 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.067889 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.171104 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.171176 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.171194 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.171225 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.171243 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.273791 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.273996 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.274022 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.274044 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.274062 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.340377 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/3.log" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.346827 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:22:34 crc kubenswrapper[5050]: E0131 05:22:34.347115 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.366766 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.376830 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.376888 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.376906 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.376932 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.376976 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.387406 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.417658 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:32Z\\\",\\\"message\\\":\\\".org/kind:Service 
k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.250:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de88cb48-af91-44f8-b3c0-73dcf8201ba5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:32.744559 7094 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:32.744558 7094 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0131 05:22:32.744574 7094 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 1.137459ms\\\\nI0131 05:22:32.744587 7094 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:32.744694 7094 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.432613 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.452817 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.472342 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.480177 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.480230 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.480242 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.480265 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.480279 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.496296 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.514139 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.530941 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.548736 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.568682 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.587301 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.589353 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.589413 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.589430 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.589458 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.589475 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.602288 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.616579 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.632540 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.653570 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6
f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.671709 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:34Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.692444 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.692494 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.692510 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.692534 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.692551 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.725125 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:57:59.782708142 +0000 UTC Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.795739 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.795790 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.795806 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.795855 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.795873 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.899502 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.899578 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.899600 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.899631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:34 crc kubenswrapper[5050]: I0131 05:22:34.899656 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:34Z","lastTransitionTime":"2026-01-31T05:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.002136 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.002181 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.002196 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.002211 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.002223 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.105186 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.105283 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.105303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.105330 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.105346 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.208025 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.208098 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.208124 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.208155 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.208177 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.311020 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.311057 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.311068 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.311084 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.311096 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.413384 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.413449 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.413472 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.413501 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.413523 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.517271 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.517375 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.517402 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.517430 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.517452 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.620151 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.620215 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.620239 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.620268 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.620292 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.722782 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.722831 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.722849 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.722871 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.722888 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.726241 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:18:29.852953281 +0000 UTC Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.736077 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.736107 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.736207 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:35 crc kubenswrapper[5050]: E0131 05:22:35.736407 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.736438 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:35 crc kubenswrapper[5050]: E0131 05:22:35.736569 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:35 crc kubenswrapper[5050]: E0131 05:22:35.736687 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:35 crc kubenswrapper[5050]: E0131 05:22:35.736833 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.766750 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:32Z\\\",\\\"message\\\":\\\".org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.250:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de88cb48-af91-44f8-b3c0-73dcf8201ba5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:32.744559 7094 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:32.744558 7094 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0131 05:22:32.744574 7094 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 1.137459ms\\\\nI0131 05:22:32.744587 7094 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:32.744694 7094 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.783910 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.803311 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.825579 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.826300 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.826392 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.826411 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.826436 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.826454 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.851378 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.874647 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.900545 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fd
ac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.919112 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17
114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.929886 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.929982 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.929999 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.930017 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:35 crc 
kubenswrapper[5050]: I0131 05:22:35.930029 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:35Z","lastTransitionTime":"2026-01-31T05:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.932899 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.947216 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9
2fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.961134 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.975486 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:35 crc kubenswrapper[5050]: I0131 05:22:35.991108 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:35Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.010471 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6
f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.029759 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.032822 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.032866 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.032885 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.032910 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.032927 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.048990 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.065137 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:36Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.135821 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.135917 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.136009 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.136036 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.136054 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.238173 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.238216 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.238227 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.238245 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.238256 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.341734 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.341791 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.341808 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.341843 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.341861 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.445624 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.445725 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.445744 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.445816 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.445835 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.548556 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.548618 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.548634 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.548658 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.548679 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.652039 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.652150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.652177 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.652205 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.652227 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.727335 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:00:51.5953864 +0000 UTC Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.754802 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.754850 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.754868 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.754920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.754938 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.857539 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.857597 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.857616 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.857640 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.857657 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.960808 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.961003 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.961039 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.961070 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:36 crc kubenswrapper[5050]: I0131 05:22:36.961092 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:36Z","lastTransitionTime":"2026-01-31T05:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.063375 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.063494 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.063565 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.063597 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.063617 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.166138 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.166205 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.166224 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.166257 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.166276 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.269082 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.269133 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.269150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.269174 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.269191 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.371544 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.371632 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.371655 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.371682 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.371699 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.481419 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.481473 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.481506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.481535 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.481554 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.584719 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.584781 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.584799 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.584824 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.584841 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.687880 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.687920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.687935 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.687992 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.688009 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.728128 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:34:23.70843143 +0000 UTC Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.735890 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.735985 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:37 crc kubenswrapper[5050]: E0131 05:22:37.736081 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.736180 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.736269 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:37 crc kubenswrapper[5050]: E0131 05:22:37.736404 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:37 crc kubenswrapper[5050]: E0131 05:22:37.736530 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:37 crc kubenswrapper[5050]: E0131 05:22:37.736814 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.790933 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.791010 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.791029 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.791053 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.791071 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.893725 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.893782 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.893799 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.893824 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.893842 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.996754 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.996803 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.996821 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.996846 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:37 crc kubenswrapper[5050]: I0131 05:22:37.996864 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:37Z","lastTransitionTime":"2026-01-31T05:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.099512 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.099558 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.099575 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.099598 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.099615 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.202079 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.202127 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.202144 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.202167 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.202184 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.306482 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.306919 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.306938 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.306999 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.307021 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.410047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.410103 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.410144 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.410168 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.410186 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.513434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.513489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.513506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.513529 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.513547 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.615897 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.616001 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.616025 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.616056 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.616079 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.718863 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.718927 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.718945 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.719007 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.719026 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.729111 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:32:19.073056819 +0000 UTC Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.822229 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.822325 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.822345 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.822368 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.822385 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.925077 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.925145 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.925163 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.925189 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:38 crc kubenswrapper[5050]: I0131 05:22:38.925206 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:38Z","lastTransitionTime":"2026-01-31T05:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.028250 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.028308 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.028325 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.028347 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.028364 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.130710 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.130778 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.130796 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.130828 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.130847 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.233793 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.233849 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.233866 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.233888 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.233905 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.337085 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.337142 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.337166 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.337196 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.337253 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.440228 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.440305 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.440323 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.440350 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.440370 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.544000 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.544091 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.544109 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.544136 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.544156 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.642794 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.643045 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 05:23:43.642996984 +0000 UTC m=+148.692158610 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.643117 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.643388 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.643515 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.643485496 +0000 UTC m=+148.692647102 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.648639 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.648765 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.648786 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.648810 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.648827 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.729844 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:39:34.665903709 +0000 UTC Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.736200 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.736278 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.736517 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.736210 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.736697 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.736768 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.737017 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.737205 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.743700 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.743784 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.743832 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.743978 5050 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.743997 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744016 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744036 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744086 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.744060942 +0000 UTC m=+148.793222578 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744120 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-31 05:23:43.744096973 +0000 UTC m=+148.793258599 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744171 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744212 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744276 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:39 crc kubenswrapper[5050]: E0131 05:22:39.744369 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.744337809 +0000 UTC m=+148.793499445 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.751486 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.751546 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.751566 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.751590 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.751609 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.855047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.855109 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.855133 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.855162 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.855182 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.960578 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.960621 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.960631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.960652 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:39 crc kubenswrapper[5050]: I0131 05:22:39.960663 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:39Z","lastTransitionTime":"2026-01-31T05:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.063806 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.063869 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.063895 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.063933 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.064003 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.167358 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.167419 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.167434 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.167454 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.167469 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.270966 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.271022 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.271033 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.271050 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.271060 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.373421 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.373452 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.373461 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.373473 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.373482 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.477805 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.477872 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.477890 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.477920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.477938 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.581317 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.581356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.581369 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.581384 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.581393 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.685944 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.686030 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.686047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.686073 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.686096 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.730542 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:35:16.414012321 +0000 UTC Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.788826 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.788881 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.788899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.788926 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.788946 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.892298 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.892351 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.892367 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.892389 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.892408 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.995244 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.995297 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.995313 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.995336 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:40 crc kubenswrapper[5050]: I0131 05:22:40.995353 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:40Z","lastTransitionTime":"2026-01-31T05:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.098771 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.098842 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.098859 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.098882 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.098900 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.201721 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.201784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.201801 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.201826 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.201846 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.305139 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.305191 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.305207 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.305228 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.305244 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.408747 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.408812 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.408829 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.408859 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.408878 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.511865 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.511943 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.512010 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.512042 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.512065 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.615320 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.615388 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.615414 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.615448 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.615472 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.718429 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.718535 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.718553 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.718579 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.718597 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.731696 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:16:35.723722791 +0000 UTC Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.736922 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:41 crc kubenswrapper[5050]: E0131 05:22:41.737080 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.737315 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:41 crc kubenswrapper[5050]: E0131 05:22:41.737415 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.737626 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:41 crc kubenswrapper[5050]: E0131 05:22:41.737732 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.737929 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:41 crc kubenswrapper[5050]: E0131 05:22:41.738056 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.821713 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.821761 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.821928 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.821984 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.822005 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.924787 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.924868 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.924885 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.924905 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:41 crc kubenswrapper[5050]: I0131 05:22:41.925007 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:41Z","lastTransitionTime":"2026-01-31T05:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.027687 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.027729 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.027747 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.027767 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.027782 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.131079 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.131165 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.131192 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.131226 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.131253 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.235013 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.235185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.235207 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.235231 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.235285 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.339113 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.339189 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.339208 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.339235 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.339254 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.442849 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.442923 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.442940 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.443017 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.443044 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.545813 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.545870 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.545886 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.545909 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.545926 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.648917 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.649027 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.649046 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.649072 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.649089 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.731998 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:45:08.983892098 +0000 UTC Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.752808 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.752856 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.752874 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.752899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.752915 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.855594 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.855654 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.855672 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.855718 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.855736 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.958829 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.958908 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.958932 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.958997 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:42 crc kubenswrapper[5050]: I0131 05:22:42.959024 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:42Z","lastTransitionTime":"2026-01-31T05:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.062361 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.062436 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.062455 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.062480 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.062500 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.164673 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.164705 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.164716 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.164733 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.164744 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.267748 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.267882 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.267899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.267923 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.267939 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.371171 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.371543 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.371695 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.371890 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.372086 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.476217 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.476285 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.476303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.476332 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.476348 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.580245 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.580285 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.580301 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.580322 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.580337 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.683052 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.683112 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.683129 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.683155 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.683174 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.732781 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:55:15.211805008 +0000 UTC Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.736174 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.736281 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:43 crc kubenswrapper[5050]: E0131 05:22:43.736536 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.736606 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.736628 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:43 crc kubenswrapper[5050]: E0131 05:22:43.736736 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:43 crc kubenswrapper[5050]: E0131 05:22:43.736931 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:43 crc kubenswrapper[5050]: E0131 05:22:43.737139 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.757174 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.785366 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.785427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.785446 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.785473 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.785493 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.871455 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.871762 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.871786 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.871820 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.871839 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: E0131 05:22:43.898633 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.903056 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.903116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.903136 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.903160 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.903177 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:43 crc kubenswrapper[5050]: E0131 05:22:43.923431 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:43Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.929783 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.929863 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.929883 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.929909 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:43 crc kubenswrapper[5050]: I0131 05:22:43.929927 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:43Z","lastTransitionTime":"2026-01-31T05:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:44Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:44 crc kubenswrapper[5050]: E0131 05:22:44.011917 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.014166 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.014230 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.014254 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.014282 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.014305 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.117551 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.117617 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.117634 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.117661 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.117679 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.220403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.220467 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.220489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.220514 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.220532 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.325024 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.325093 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.325112 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.325138 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.325159 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.427833 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.427890 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.427905 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.427924 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.427939 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.530401 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.530470 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.530488 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.530516 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.530534 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.633718 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.633786 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.633804 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.633829 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.633848 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.733850 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:33:17.390415835 +0000 UTC Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.737758 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.738019 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.738057 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.738145 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.738219 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.842045 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.842110 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.842192 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.842220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.842239 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.944607 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.944667 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.944685 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.944713 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:44 crc kubenswrapper[5050]: I0131 05:22:44.944731 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:44Z","lastTransitionTime":"2026-01-31T05:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.047773 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.047839 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.047859 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.047884 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.047904 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.151347 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.151414 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.151433 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.151464 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.151492 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.255009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.255050 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.255064 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.255082 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.255096 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.357739 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.357790 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.357801 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.357824 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.357839 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.460862 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.460927 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.460938 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.460977 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.460992 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.564310 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.564371 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.564392 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.564416 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.564433 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.667095 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.667163 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.667187 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.667220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.667243 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.734037 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:19:14.663589925 +0000 UTC Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.735359 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.735396 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.735415 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:45 crc kubenswrapper[5050]: E0131 05:22:45.735521 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.735554 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:45 crc kubenswrapper[5050]: E0131 05:22:45.735693 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:45 crc kubenswrapper[5050]: E0131 05:22:45.735890 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:45 crc kubenswrapper[5050]: E0131 05:22:45.736238 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.754430 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.770268 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.770316 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 
05:22:45.770334 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.770358 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.770377 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.772077 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3
5512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.786778 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde
61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.801549 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.819132 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b16c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.842083 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6
f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.863848 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.872849 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.872897 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.872919 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.872981 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.873008 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.884343 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.903480 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.920992 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.939834 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc 
kubenswrapper[5050]: I0131 05:22:45.970932 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ef5b634-fec8-410b-9bcf-fb115fe54c36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb972f9fdac10faa54b50a9219d070fa279646e9ee0e36618f77bc5dc254566c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://0e61a76ed8a8277321659bfeb4ba1ff0a3a8e2f2ba87f478b9a4ceb89afa59c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae13eecc86b51cc95284d3b3fc12359d2e2568ba76275c43562b99c1527b14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe6452db8a61013ca3bda0a2d3a43003ee7151a412927d8bfe779796d2af708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e56aaf7d76d5d8e22bd63b2f543c9d69526ee0f4f704fdf93f230299d0d9f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6dcec9ec40aed9a03eac63c87fc2e15afc66ead30ede2616563482f356a508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6dcec9ec40aed9a03eac63c87fc2e15afc66ead30ede2616563482f356a508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f84e4ecfa0bd44da2b5068a836a1f208e0f49db5d54aadf7b2d6f9a2d997ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f84e4ecfa0bd44da2b5068a836a1f208e0f49db5d54aadf7b2d6f9a2d997ed2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2ce391e182a2d1e4561d24243dcbffe1fe282bfd6559836365acdea77c40290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2ce391e182a2d1e4561d24243dcbffe1fe282bfd6559836365acdea77c40290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.976690 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.980420 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.980450 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.980484 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.980505 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:45Z","lastTransitionTime":"2026-01-31T05:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:45 crc kubenswrapper[5050]: I0131 05:22:45.992999 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f297
51a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:45Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.012775 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:46Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.029728 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:46Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.051709 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:32Z\\\",\\\"message\\\":\\\".org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.250:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de88cb48-af91-44f8-b3c0-73dcf8201ba5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:32.744559 7094 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:32.744558 7094 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0131 05:22:32.744574 7094 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 1.137459ms\\\\nI0131 05:22:32.744587 7094 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:32.744694 7094 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:46Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.071374 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:46Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.084151 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.084233 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.084261 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.084295 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.084321 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.094827 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:46Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.187422 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.187475 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.187495 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.187519 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.187536 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.290303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.290406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.290432 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.290463 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.290485 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.393093 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.393146 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.393163 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.393186 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.393204 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.496416 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.496475 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.496492 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.496518 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.496535 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.599051 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.599108 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.599120 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.599143 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.599157 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.703450 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.703507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.703523 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.703548 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.703567 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.734587 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:36:48.995268949 +0000 UTC
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.736909 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"
Jan 31 05:22:46 crc kubenswrapper[5050]: E0131 05:22:46.737191 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.806101 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.806236 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.806257 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.806278 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.806296 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.909500 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.909558 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.909575 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.909598 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:46 crc kubenswrapper[5050]: I0131 05:22:46.909616 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:46Z","lastTransitionTime":"2026-01-31T05:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.013268 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.013340 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.013360 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.013384 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.013401 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.116994 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.117065 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.117083 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.117112 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.117130 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.220535 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.220579 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.220596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.220621 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.220639 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.323430 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.323462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.323479 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.323499 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.323514 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.426741 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.426861 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.426889 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.426925 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.426945 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.529876 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.529992 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.530016 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.530047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.530067 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.632997 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.633074 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.633096 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.633129 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.633151 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.734995 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:19:00.007027403 +0000 UTC
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.735566 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.735575 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.736220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.736255 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.736271 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.736295 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.736312 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: E0131 05:22:47.736434 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.736530 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.736599 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r"
Jan 31 05:22:47 crc kubenswrapper[5050]: E0131 05:22:47.736727 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 05:22:47 crc kubenswrapper[5050]: E0131 05:22:47.737263 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 05:22:47 crc kubenswrapper[5050]: E0131 05:22:47.737520 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.752909 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.839685 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.839756 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.839785 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.839820 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.839845 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.942977 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.943027 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.943044 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.943069 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:47 crc kubenswrapper[5050]: I0131 05:22:47.943090 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:47Z","lastTransitionTime":"2026-01-31T05:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.047164 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.047285 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.047306 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.047356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.047379 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.150241 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.150333 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.150355 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.150397 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.150447 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.254555 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.254615 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.254635 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.254662 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.254682 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.357442 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.357479 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.357490 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.357506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.357518 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.460636 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.460727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.460747 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.460770 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.460787 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.563414 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.563460 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.563474 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.563495 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.563510 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.667305 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.667382 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.667406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.667439 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.667462 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.735450 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:32:43.875504363 +0000 UTC
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.770677 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.770743 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.770762 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.770786 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.770804 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.874329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.874426 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.874447 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.874506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.874524 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.978278 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.978347 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.978363 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.978389 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:48 crc kubenswrapper[5050]: I0131 05:22:48.978406 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:48Z","lastTransitionTime":"2026-01-31T05:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.081641 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.081703 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.081720 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.081745 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.081764 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.185342 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.185400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.185432 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.185459 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.185479 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.288821 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.288882 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.288899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.288923 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.288995 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.392406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.392465 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.392483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.392509 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.392526 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.495999 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.496100 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.496119 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.496210 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.496246 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.599321 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.599452 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.599476 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.599513 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.599538 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.702884 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.702923 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.702933 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.702973 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.702984 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.736177 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:02:08.091436361 +0000 UTC Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.736539 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.736628 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:49 crc kubenswrapper[5050]: E0131 05:22:49.736767 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.736833 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:49 crc kubenswrapper[5050]: E0131 05:22:49.737060 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:49 crc kubenswrapper[5050]: E0131 05:22:49.737184 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.739063 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:49 crc kubenswrapper[5050]: E0131 05:22:49.739523 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.805725 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.805788 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.805804 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.805829 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.805847 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.908475 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.908516 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.908533 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.908559 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:49 crc kubenswrapper[5050]: I0131 05:22:49.908576 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:49Z","lastTransitionTime":"2026-01-31T05:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.011316 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.011391 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.011471 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.011507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.011532 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.115626 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.115702 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.115726 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.115757 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.115782 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.219050 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.219116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.219139 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.219174 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.219194 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.323204 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.323274 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.323294 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.323318 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.323335 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.427242 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.427304 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.427322 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.427348 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.427371 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.530280 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.530329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.530346 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.530371 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.530389 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.633515 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.633582 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.633602 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.633628 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.633647 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.736025 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.736081 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.736099 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.736121 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.736138 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.736343 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:44:59.425981527 +0000 UTC Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.838448 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.838504 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.838525 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.838549 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.838565 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.941204 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.941267 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.941285 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.941309 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:50 crc kubenswrapper[5050]: I0131 05:22:50.941327 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:50Z","lastTransitionTime":"2026-01-31T05:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.044404 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.044507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.044560 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.044584 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.044602 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.148216 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.148329 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.148349 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.148407 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.148428 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.252032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.252088 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.252105 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.252128 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.252199 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.355462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.355531 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.355549 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.355575 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.355594 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.458625 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.458680 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.458739 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.458769 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.458790 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.562753 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.562824 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.562843 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.562870 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.567597 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.670335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.670386 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.670404 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.670429 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.670446 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.736301 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.736385 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.736423 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:35:45.451894332 +0000 UTC
Jan 31 05:22:51 crc kubenswrapper[5050]: E0131 05:22:51.736471 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.736491 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.736384 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r"
Jan 31 05:22:51 crc kubenswrapper[5050]: E0131 05:22:51.736604 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 05:22:51 crc kubenswrapper[5050]: E0131 05:22:51.736694 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d"
Jan 31 05:22:51 crc kubenswrapper[5050]: E0131 05:22:51.736837 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.773405 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.773490 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.773508 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.773531 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.773548 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.876675 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.876727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.876745 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.876768 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.876785 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.981062 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.981110 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.981129 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.981150 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:51 crc kubenswrapper[5050]: I0131 05:22:51.981167 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:51Z","lastTransitionTime":"2026-01-31T05:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.083903 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.084087 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.084167 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.084201 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.084264 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.186899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.187123 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.187252 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.187407 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.187550 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.294300 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.294547 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.294691 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.294832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.295016 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.398551 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.398615 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.398634 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.398662 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.398680 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.502337 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.502397 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.502415 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.502442 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.502466 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.605188 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.605241 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.605260 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.605308 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.605327 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.708656 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.708709 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.708728 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.708752 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.708768 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.736809 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:16:06.154774065 +0000 UTC
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.812285 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.812334 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.812352 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.812374 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.812388 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.915831 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.915877 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.915894 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.915914 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:52 crc kubenswrapper[5050]: I0131 05:22:52.915930 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:52Z","lastTransitionTime":"2026-01-31T05:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.019710 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.019792 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.019816 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.019851 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.019878 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.123546 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.123613 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.123631 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.123659 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.123678 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.226983 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.227046 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.227095 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.227122 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.227139 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.330393 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.330458 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.330479 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.330507 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.330526 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.433645 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.433704 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.433724 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.433750 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.433769 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.536624 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.536697 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.536717 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.536745 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.536767 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.639133 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.639188 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.639206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.639233 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.639250 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.735308 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.735330 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 05:22:53 crc kubenswrapper[5050]: E0131 05:22:53.735465 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.735535 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r"
Jan 31 05:22:53 crc kubenswrapper[5050]: E0131 05:22:53.735640 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.735683 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 05:22:53 crc kubenswrapper[5050]: E0131 05:22:53.735758 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 05:22:53 crc kubenswrapper[5050]: E0131 05:22:53.736113 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.737636 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:24:30.775359544 +0000 UTC
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.741803 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.741855 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.741872 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.741894 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.741912 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.844739 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.844787 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.844807 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.844830 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.844846 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.948544 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.948606 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.948626 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.948653 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:53 crc kubenswrapper[5050]: I0131 05:22:53.948671 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:53Z","lastTransitionTime":"2026-01-31T05:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.057898 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.058018 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.058039 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.058116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.058137 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.161493 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.161566 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.161585 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.161614 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.161636 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.264334 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.264413 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.264431 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.264453 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.264470 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.367484 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.367539 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.367557 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.367582 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.367599 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.387303 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.387363 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.387380 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.387403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.387420 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.407501 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:54Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.412810 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.412877 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.412899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.412926 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.412992 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.425212 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.425379 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.425477 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs podName:e415fe7d-85f7-4a4f-8683-ffb3a0a8096d nodeName:}" failed. No retries permitted until 2026-01-31 05:23:58.425450377 +0000 UTC m=+163.474612013 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs") pod "network-metrics-daemon-ghk5r" (UID: "e415fe7d-85f7-4a4f-8683-ffb3a0a8096d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.434120 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-5
7d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:54Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.439185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.439227 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.439243 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.439266 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.439282 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.460360 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:54Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.465832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.465900 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.465920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.465979 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.466003 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.488564 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:54Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.494051 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.494116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.494129 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.494153 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.494168 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.514059 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec9182ce-0cc0-426f-b3ce-57d540740844\\\",\\\"systemUUID\\\":\\\"668e546d-c46d-479d-b853-255ef6694306\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:54Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:54 crc kubenswrapper[5050]: E0131 05:22:54.514288 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.516561 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.516615 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.516634 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.516668 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.516690 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.620122 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.620185 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.620203 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.620231 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.620251 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.723257 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.723315 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.723335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.723362 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.723381 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.737809 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:24:04.759841434 +0000 UTC Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.826120 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.826177 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.826195 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.826220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.826238 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.929697 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.929754 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.929771 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.929795 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:54 crc kubenswrapper[5050]: I0131 05:22:54.929813 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:54Z","lastTransitionTime":"2026-01-31T05:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.033159 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.033400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.033544 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.033708 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.033889 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.137047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.137364 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.137530 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.137688 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.137821 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.240887 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.240940 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.240970 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.240988 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.240998 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.343804 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.343865 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.343879 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.343900 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.343912 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.447188 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.447253 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.447271 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.447302 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.447337 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.550522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.550599 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.550618 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.550650 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.550671 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.656753 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.658103 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.658156 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.658187 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.658207 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.736143 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.736264 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.736274 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:55 crc kubenswrapper[5050]: E0131 05:22:55.737266 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:55 crc kubenswrapper[5050]: E0131 05:22:55.737396 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:55 crc kubenswrapper[5050]: E0131 05:22:55.737496 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.736097 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:55 crc kubenswrapper[5050]: E0131 05:22:55.738002 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.738070 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:57:24.5860223 +0000 UTC Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.760571 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://174847d522b0256b4dbb3222c091aed78ad18305be652d6c08bdd39cd8d58af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6e4c0f4817e87900bfd5fe3e062d109757466dc616e09ec20c5bd303c00fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.760822 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.760859 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.760877 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.760904 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.760927 5050 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.788125 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f6f8108-9a7b-466b-8cf5-c578bd9f447a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745706a579bb833688897b4cb2cb6737799dd17e06289dd9f86feb3157869091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6d422a3c7a1cc6368fe9dbd7e7225de23b9192bd168a1d69c0a41e96b49da53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3059317ec80e2b5df0d860679c128a09b24ebbe95d66bc1459fa82c187df7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f25e60c0d62edc8197901b572780cc273936ab314bfe86117a1b854b68dc85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:39Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://855fdac7827a338b24d314ecb77031f212d9d32cb8ac928dcfbb952517e79084\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd44be22d5d69433aad9fd2706d8b78d4d3a63c20c952a7145593f4075efffc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c5b082b6086665c2019686af30d48c694da7492a30fe14b2e63d62be172627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll5cj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5cnpw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.808630 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08749b03-1335-4fda-ad78-1b95f1509423\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbd539e3f055d0752e96e4cda1e537dd882014e4da194ccdaabed99d4e34a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08d1b1e392725f71c6af84f95a2cc3c1729395eb1f41efeced729172be7c9999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c8b0499c40d65b63fd763970a21129c1da53c1f88611ec1a7daccd9bf9943ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f52587f5db1ae826c5b87fc17114fb8dbacd4fa5eef347fa3ba49bbcd626c783\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.826433 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b4308d-ce2e-45e3-b1ba-e9379f2936f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e17846ad655c9c39b764cf8aef5df05d0f97e26aa56992971f4db04c9750ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://944d6a7d5d890068d8b0dd96e2ec28fd0cf130fde1f6092eb13176cde30a0726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://944d6a7d5d890068d8b0dd96e2ec28fd0cf130fde1f6092eb13176cde30a0726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.850849 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://745c3c72a6648f3383221c0fba52327b4560903d0f52df489ef7fed116c60678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.863301 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.863374 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.863394 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.863423 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.863442 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.872044 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8394e6-1648-4ba8-970b-242434354d42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fc72d4cd93a2a4651e5e995717b3c872402ef127505641df728071bd90a8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2b5rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tbf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.892415 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824e777c-379f-47d8-bc4f-c8d3b0f5ad52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f7a88e9790535a684300ab4d1935e64e9609c516b8b36f792a483245f2a135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ad18ccccd727c0663940eab33b57357217b1
6c41f5822ef1182cce8b3dd10de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wfwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cd5w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.914419 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81eb4b11-a1e6-48e9-9c95-c03d0642eaad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T05:21:35Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 05:21:29.366615 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 05:21:29.370675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1419282514/tls.crt::/tmp/serving-cert-1419282514/tls.key\\\\\\\"\\\\nI0131 05:21:35.500810 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 05:21:35.507127 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 05:21:35.507261 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 05:21:35.507353 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 05:21:35.507419 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 05:21:35.520895 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 05:21:35.520939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 05:21:35.520981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 05:21:35.520987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 05:21:35.520995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 05:21:35.521000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 05:21:35.521154 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0131 05:21:35.522687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1371771e89538b4c78f515a1e71b8008a
970ce897821f6f2e037a9028cc896af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.935939 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.955926 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.966417 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.966491 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.966512 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.966540 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.966561 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:55Z","lastTransitionTime":"2026-01-31T05:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.976018 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:55 crc kubenswrapper[5050]: I0131 05:22:55.995297 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t9kbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"351a69d0-1fcc-4576-aca8-011668de66da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd51c5d16aa98637eb6118c9df2c7a120ca0c10321ef649967fca628d04eb4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jhnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t9kbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:55Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.010889 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tcp4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a3f7cf-47c2-4989-b7b6-8b5d5d02cbdf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f5fd641b0876ac44ef884dfcc1b32472b25add0004d4a6f26e186e4e84e0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppwd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tcp4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.043463 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ef5b634-fec8-410b-9bcf-fb115fe54c36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb972f9fdac10faa54b50a9219d070fa279646e9ee0e36618f77bc5dc254566c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e61a76ed8a8277321659bfeb4ba1ff0a3a8e2f2ba87f478b9a4ceb89afa59c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae13eecc86b51cc95284d3b3fc12359d2e2568ba76275c43562b99c1527b14e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe6452db8a61013ca3bda0a2d3a43003ee7151a412927d8bfe779796d2af708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e56aaf7d76d5d8e22bd63b2f543c9d69526ee0f4f704fdf93f230299d0d9f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6dcec9ec40aed9a03eac63c87fc2e15afc66ead30ede2616563482f356a508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b6dcec9ec40aed9a03eac63c87fc2e15afc66ead30ede2616563482f356a508\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T05:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f84e4ecfa0bd44da2b5068a836a1f208e0f49db5d54aadf7b2d6f9a2d997ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f84e4ecfa0bd44da2b5068a836a1f208e0f49db5d54aadf7b2d6f9a2d997ed2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2ce391e182a2d1e4561d24243dcbffe1fe282bfd6559836365acdea77c40290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2ce391e182a2d1e4561d24243dcbffe1fe282bfd6559836365acdea77c40290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.064366 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708bc5f1-eae4-40b4-b64b-84a5cba35a9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57667e0c5e6f0123db58892dd3d39fdfac9c87e5ce0b657cb224ae4230fa002a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cd5479662bda97ebba8d6035e01526a419b845f4b88158c67d6d4848cd74d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:
17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8af36903c149a5ffa57d848350999f3b0b38b90a91845b50d5d7ac67de6016\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.069447 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.069489 
5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.069503 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.069520 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.069531 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.085094 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d533d57d34d9e6c6497993e0bd22d929fb8bf80bd54e146fe5ddbf1549584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.102263 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgpmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeb03b23-b94b-4aaf-aac2-a04db399ec55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:24Z\\\",\\\"message\\\":\\\"2026-01-31T05:21:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f\\\\n2026-01-31T05:21:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a75854b1-09bf-4e0d-819c-1fd9d3f2942f to /host/opt/cni/bin/\\\\n2026-01-31T05:21:39Z [verbose] multus-daemon started\\\\n2026-01-31T05:21:39Z [verbose] Readiness Indicator file check\\\\n2026-01-31T05:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjh72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgpmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.127274 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T05:22:32Z\\\",\\\"message\\\":\\\".org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.250:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de88cb48-af91-44f8-b3c0-73dcf8201ba5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 05:22:32.744559 7094 ovnkube.go:599] Stopped ovnkube\\\\nI0131 05:22:32.744558 7094 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0131 05:22:32.744574 7094 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 1.137459ms\\\\nI0131 05:22:32.744587 7094 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 05:22:32.744694 7094 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T05:22:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T05:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7c8d69bac1f72df4e
96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T05:21:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T05:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lwcbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8hx4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.140001 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T05:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqkjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T05:21:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ghk5r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T05:22:56Z is after 2025-08-24T17:21:41Z" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.173082 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.173356 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.173593 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.173837 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.174083 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.278354 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.278611 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.278892 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.279111 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.279324 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.383180 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.383427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.383504 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.383612 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.383700 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.487407 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.487485 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.487509 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.487537 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.487561 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.590435 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.590500 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.590519 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.590550 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.590568 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.693124 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.693179 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.693195 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.693220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.693239 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.739090 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:04:09.739310834 +0000 UTC Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.796047 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.796294 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.796428 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.796577 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.796770 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.900397 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.900460 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.900477 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.900500 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:56 crc kubenswrapper[5050]: I0131 05:22:56.900517 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:56Z","lastTransitionTime":"2026-01-31T05:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.003768 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.003816 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.003833 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.003855 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.003873 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.106909 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.106986 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.107003 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.107029 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.107048 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.209619 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.209880 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.210119 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.210481 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.210796 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.314989 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.315380 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.315586 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.315780 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.316025 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.419547 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.419606 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.419627 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.419653 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.419671 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.522500 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.522567 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.522587 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.522617 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.522639 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.626695 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.626766 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.626787 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.626816 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.626835 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.729693 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.729767 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.729788 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.729818 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.729841 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.737610 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:57 crc kubenswrapper[5050]: E0131 05:22:57.737753 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.737824 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:57 crc kubenswrapper[5050]: E0131 05:22:57.737888 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.738182 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:57 crc kubenswrapper[5050]: E0131 05:22:57.738318 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.738590 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:57 crc kubenswrapper[5050]: E0131 05:22:57.738663 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.739496 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:30:37.985283318 +0000 UTC Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.833450 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.833514 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.833532 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.833561 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.833581 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.936715 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.936767 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.936784 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.936809 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:57 crc kubenswrapper[5050]: I0131 05:22:57.936856 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:57Z","lastTransitionTime":"2026-01-31T05:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.039581 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.039637 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.039649 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.039668 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.039680 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.143406 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.143479 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.143496 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.143522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.143541 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.246291 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.246553 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.246581 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.246616 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.246635 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.349674 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.349729 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.349746 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.349770 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.349786 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.453563 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.453630 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.453648 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.453676 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.453694 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.557425 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.557496 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.557512 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.557540 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.557562 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.661567 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.661642 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.661661 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.661691 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.661714 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.739903 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:18:45.789417693 +0000 UTC Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.764398 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.764464 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.764474 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.764491 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.764503 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.867398 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.867476 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.867499 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.867527 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.867548 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.970652 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.970708 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.970727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.970757 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:58 crc kubenswrapper[5050]: I0131 05:22:58.970778 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:58Z","lastTransitionTime":"2026-01-31T05:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.074590 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.074670 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.074693 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.074725 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.074745 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.178079 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.178142 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.178162 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.178194 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.178213 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.281527 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.281584 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.281603 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.281633 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.281653 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.385761 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.385825 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.385848 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.385880 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.385901 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.488590 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.488655 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.488675 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.488702 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.488720 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.591076 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.591125 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.591141 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.591165 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.591186 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.694508 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.694577 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.694594 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.694622 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.694639 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.735852 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.735892 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.735895 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:22:59 crc kubenswrapper[5050]: E0131 05:22:59.736102 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.736153 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:22:59 crc kubenswrapper[5050]: E0131 05:22:59.736301 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:22:59 crc kubenswrapper[5050]: E0131 05:22:59.736865 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:22:59 crc kubenswrapper[5050]: E0131 05:22:59.737094 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.737358 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:22:59 crc kubenswrapper[5050]: E0131 05:22:59.737605 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8hx4t_openshift-ovn-kubernetes(7d29ecd7-304b-4356-9f7c-c4d8d4ee809e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.740521 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:25:47.725498706 +0000 UTC Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.797182 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.797242 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.797261 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.797286 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.797304 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.901246 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.901338 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.901362 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.901392 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:22:59 crc kubenswrapper[5050]: I0131 05:22:59.901420 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:22:59Z","lastTransitionTime":"2026-01-31T05:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.004206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.004273 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.004297 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.004328 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.004350 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.106797 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.106853 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.106870 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.106893 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.106911 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.210009 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.210074 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.210092 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.210116 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.210134 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.312781 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.312844 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.312859 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.312881 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.312896 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.416138 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.416193 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.416213 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.416236 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.416253 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.519278 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.519319 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.519327 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.519341 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.519351 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.622502 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.622573 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.622587 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.622610 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.622626 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.726360 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.726428 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.726444 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.726471 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.726493 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.742076 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:37:16.035661357 +0000 UTC Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.829379 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.829439 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.829459 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.829486 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.829509 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.932466 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.932536 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.932554 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.932581 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:00 crc kubenswrapper[5050]: I0131 05:23:00.932605 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:00Z","lastTransitionTime":"2026-01-31T05:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.036359 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.036432 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.036451 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.036483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.036501 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.139936 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.140023 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.140043 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.140069 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.140087 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.242225 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.242279 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.242296 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.242319 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.242340 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.345559 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.345616 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.345633 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.345658 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.345676 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.448555 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.448616 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.448634 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.448659 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.448675 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.552705 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.552775 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.552794 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.552821 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.552840 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.655857 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.655903 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.655920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.655941 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.655993 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.735488 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.735559 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.735559 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:01 crc kubenswrapper[5050]: E0131 05:23:01.735641 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:01 crc kubenswrapper[5050]: E0131 05:23:01.735799 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.735815 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:01 crc kubenswrapper[5050]: E0131 05:23:01.735907 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:01 crc kubenswrapper[5050]: E0131 05:23:01.736088 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.742288 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:11:29.22271453 +0000 UTC Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.758428 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.758622 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.758800 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.758941 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.759122 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.862008 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.862065 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.862084 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.862119 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.862140 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.965407 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.965468 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.965489 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.965515 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:01 crc kubenswrapper[5050]: I0131 05:23:01.965533 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:01Z","lastTransitionTime":"2026-01-31T05:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.068270 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.068684 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.068839 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.069032 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.069182 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.172645 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.172710 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.172729 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.172757 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.172782 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.276099 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.276156 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.276174 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.276198 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.276215 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.379402 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.379838 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.380157 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.380323 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.380474 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.483851 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.483908 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.483926 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.483983 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.484001 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.587334 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.587383 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.587400 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.587424 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.587441 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.690410 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.690904 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.691161 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.691306 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.691452 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.743206 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:21:36.459882577 +0000 UTC Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.794055 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.794106 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.794124 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.794148 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.794167 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.896556 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.896830 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.896920 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.897059 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:02 crc kubenswrapper[5050]: I0131 05:23:02.897169 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:02Z","lastTransitionTime":"2026-01-31T05:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:02.999970 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.000219 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.000313 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.000417 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.000510 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.104203 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.104279 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.104300 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.104332 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.104352 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.206533 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.206782 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.206873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.206986 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.207068 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.310122 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.310169 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.310186 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.310211 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.310228 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.413248 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.413923 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.414176 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.414378 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.414547 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.518648 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.519081 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.519238 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.519378 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.519524 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.622675 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.623220 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.623403 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.623567 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.623695 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.727008 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.727073 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.727090 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.727120 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.727139 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.735850 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.736029 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:03 crc kubenswrapper[5050]: E0131 05:23:03.736140 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:03 crc kubenswrapper[5050]: E0131 05:23:03.736352 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.736558 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:03 crc kubenswrapper[5050]: E0131 05:23:03.736698 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.737433 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:03 crc kubenswrapper[5050]: E0131 05:23:03.738046 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.743677 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:45:40.903341253 +0000 UTC Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.834913 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.835010 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.835036 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.835082 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.835106 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.938986 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.939057 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.939081 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.939109 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:03 crc kubenswrapper[5050]: I0131 05:23:03.939128 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:03Z","lastTransitionTime":"2026-01-31T05:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.043197 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.043596 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.043701 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.043811 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.043919 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.147252 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.147323 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.147342 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.147368 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.147392 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.251733 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.251812 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.251832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.251868 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.251892 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.354813 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.354882 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.354901 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.354936 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.354983 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.457279 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.457335 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.457354 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.457374 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.457391 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.560286 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.560364 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.560387 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.560420 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.560441 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.664080 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.664148 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.664166 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.664194 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.664214 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.745134 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:17:49.132938123 +0000 UTC Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.766561 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.766619 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.766638 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.766663 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.766684 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.844265 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.844336 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.844363 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.844398 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.844419 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T05:23:04Z","lastTransitionTime":"2026-01-31T05:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.942992 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv"] Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.943518 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.946623 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.946768 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.946927 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.947775 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.987589 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.987551182 podStartE2EDuration="1m1.987551182s" podCreationTimestamp="2026-01-31 05:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:04.972501372 +0000 UTC m=+110.021663028" watchObservedRunningTime="2026-01-31 05:23:04.987551182 +0000 UTC m=+110.036712818" Jan 31 05:23:04 crc kubenswrapper[5050]: I0131 05:23:04.988084 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.988068806 podStartE2EDuration="17.988068806s" podCreationTimestamp="2026-01-31 05:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:04.987778768 +0000 UTC m=+110.036940404" watchObservedRunningTime="2026-01-31 05:23:04.988068806 
+0000 UTC m=+110.037230442" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.038394 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-t9kbs" podStartSLOduration=90.038358809 podStartE2EDuration="1m30.038358809s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.038334388 +0000 UTC m=+110.087496004" watchObservedRunningTime="2026-01-31 05:23:05.038358809 +0000 UTC m=+110.087520445" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.038555 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podStartSLOduration=90.038543353 podStartE2EDuration="1m30.038543353s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.023201825 +0000 UTC m=+110.072363461" watchObservedRunningTime="2026-01-31 05:23:05.038543353 +0000 UTC m=+110.087704989" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.050210 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ed135c3-c9d0-4f6e-ab31-068313f52c38-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.050278 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ed135c3-c9d0-4f6e-ab31-068313f52c38-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: 
\"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.050325 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ed135c3-c9d0-4f6e-ab31-068313f52c38-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.050582 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ed135c3-c9d0-4f6e-ab31-068313f52c38-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.050801 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ed135c3-c9d0-4f6e-ab31-068313f52c38-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.053416 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tcp4l" podStartSLOduration=90.053394999 podStartE2EDuration="1m30.053394999s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.052711902 +0000 UTC m=+110.101873508" watchObservedRunningTime="2026-01-31 
05:23:05.053394999 +0000 UTC m=+110.102556595" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.090405 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cd5w6" podStartSLOduration=89.090370535 podStartE2EDuration="1m29.090370535s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.070080081 +0000 UTC m=+110.119241707" watchObservedRunningTime="2026-01-31 05:23:05.090370535 +0000 UTC m=+110.139532151" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.109523 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.109498749 podStartE2EDuration="1m30.109498749s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.090116709 +0000 UTC m=+110.139278345" watchObservedRunningTime="2026-01-31 05:23:05.109498749 +0000 UTC m=+110.158660355" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.151810 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ed135c3-c9d0-4f6e-ab31-068313f52c38-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.151896 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ed135c3-c9d0-4f6e-ab31-068313f52c38-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: 
\"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.151940 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ed135c3-c9d0-4f6e-ab31-068313f52c38-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.152000 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ed135c3-c9d0-4f6e-ab31-068313f52c38-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.152071 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ed135c3-c9d0-4f6e-ab31-068313f52c38-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.152113 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ed135c3-c9d0-4f6e-ab31-068313f52c38-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.152062 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" 
(UniqueName: \"kubernetes.io/host-path/8ed135c3-c9d0-4f6e-ab31-068313f52c38-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.153374 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ed135c3-c9d0-4f6e-ab31-068313f52c38-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.164006 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ed135c3-c9d0-4f6e-ab31-068313f52c38-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.172207 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ed135c3-c9d0-4f6e-ab31-068313f52c38-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fmqjv\" (UID: \"8ed135c3-c9d0-4f6e-ab31-068313f52c38\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.233013 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=22.232989845 podStartE2EDuration="22.232989845s" podCreationTimestamp="2026-01-31 05:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.232856262 +0000 UTC 
m=+110.282017868" watchObservedRunningTime="2026-01-31 05:23:05.232989845 +0000 UTC m=+110.282151481" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.246345 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.246325392 podStartE2EDuration="1m29.246325392s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.246287542 +0000 UTC m=+110.295449158" watchObservedRunningTime="2026-01-31 05:23:05.246325392 +0000 UTC m=+110.295486988" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.268440 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" Jan 31 05:23:05 crc kubenswrapper[5050]: W0131 05:23:05.283337 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ed135c3_c9d0_4f6e_ab31_068313f52c38.slice/crio-276d82e9cfbc1ea879995a4a557ca1f7a0755a8a6906bd2257be818047880ab0 WatchSource:0}: Error finding container 276d82e9cfbc1ea879995a4a557ca1f7a0755a8a6906bd2257be818047880ab0: Status 404 returned error can't find the container with id 276d82e9cfbc1ea879995a4a557ca1f7a0755a8a6906bd2257be818047880ab0 Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.297101 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tgpmd" podStartSLOduration=89.297076017 podStartE2EDuration="1m29.297076017s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.281285697 +0000 UTC m=+110.330447313" watchObservedRunningTime="2026-01-31 05:23:05.297076017 +0000 UTC 
m=+110.346237643" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.316558 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5cnpw" podStartSLOduration=89.31653654 podStartE2EDuration="1m29.31653654s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.315252897 +0000 UTC m=+110.364414513" watchObservedRunningTime="2026-01-31 05:23:05.31653654 +0000 UTC m=+110.365698156" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.461369 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" event={"ID":"8ed135c3-c9d0-4f6e-ab31-068313f52c38","Type":"ContainerStarted","Data":"4b6d64c79d4046f241982cb9cf140d8a78d985556f54238502eb54e4ca08f43f"} Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.461433 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" event={"ID":"8ed135c3-c9d0-4f6e-ab31-068313f52c38","Type":"ContainerStarted","Data":"276d82e9cfbc1ea879995a4a557ca1f7a0755a8a6906bd2257be818047880ab0"} Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.735316 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.735373 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.735453 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:05 crc kubenswrapper[5050]: E0131 05:23:05.737092 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.737153 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:05 crc kubenswrapper[5050]: E0131 05:23:05.737347 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:05 crc kubenswrapper[5050]: E0131 05:23:05.737476 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:05 crc kubenswrapper[5050]: E0131 05:23:05.737659 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.746002 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:26:33.264511523 +0000 UTC Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.746076 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 05:23:05 crc kubenswrapper[5050]: I0131 05:23:05.757390 5050 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 05:23:07 crc kubenswrapper[5050]: I0131 05:23:07.736086 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:07 crc kubenswrapper[5050]: I0131 05:23:07.736460 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:07 crc kubenswrapper[5050]: I0131 05:23:07.736248 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:07 crc kubenswrapper[5050]: I0131 05:23:07.736594 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:07 crc kubenswrapper[5050]: E0131 05:23:07.736814 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:07 crc kubenswrapper[5050]: E0131 05:23:07.737022 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:07 crc kubenswrapper[5050]: E0131 05:23:07.737197 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:07 crc kubenswrapper[5050]: E0131 05:23:07.737359 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:09 crc kubenswrapper[5050]: I0131 05:23:09.736280 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:09 crc kubenswrapper[5050]: I0131 05:23:09.736349 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:09 crc kubenswrapper[5050]: I0131 05:23:09.736281 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:09 crc kubenswrapper[5050]: E0131 05:23:09.736571 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:09 crc kubenswrapper[5050]: E0131 05:23:09.736458 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:09 crc kubenswrapper[5050]: E0131 05:23:09.736737 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:09 crc kubenswrapper[5050]: I0131 05:23:09.737176 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:09 crc kubenswrapper[5050]: E0131 05:23:09.737364 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:10 crc kubenswrapper[5050]: I0131 05:23:10.495002 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/1.log" Jan 31 05:23:10 crc kubenswrapper[5050]: I0131 05:23:10.496066 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/0.log" Jan 31 05:23:10 crc kubenswrapper[5050]: I0131 05:23:10.496293 5050 generic.go:334] "Generic (PLEG): container finished" podID="eeb03b23-b94b-4aaf-aac2-a04db399ec55" containerID="bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966" exitCode=1 Jan 31 05:23:10 crc kubenswrapper[5050]: I0131 05:23:10.496416 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerDied","Data":"bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966"} Jan 31 05:23:10 crc kubenswrapper[5050]: I0131 05:23:10.496507 5050 scope.go:117] "RemoveContainer" containerID="b424b46cb8f79dff63e3505d3e9556f188c5c55bcf2a19166c1bd23f60b3c2f2" Jan 31 05:23:10 crc kubenswrapper[5050]: I0131 05:23:10.497417 5050 scope.go:117] "RemoveContainer" containerID="bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966" Jan 31 05:23:10 crc kubenswrapper[5050]: E0131 05:23:10.497738 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tgpmd_openshift-multus(eeb03b23-b94b-4aaf-aac2-a04db399ec55)\"" pod="openshift-multus/multus-tgpmd" podUID="eeb03b23-b94b-4aaf-aac2-a04db399ec55" Jan 31 05:23:10 crc kubenswrapper[5050]: I0131 05:23:10.517851 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fmqjv" podStartSLOduration=95.517826277 podStartE2EDuration="1m35.517826277s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:05.481639139 +0000 UTC m=+110.530800745" watchObservedRunningTime="2026-01-31 05:23:10.517826277 +0000 UTC m=+115.566987913" Jan 31 05:23:11 crc kubenswrapper[5050]: I0131 05:23:11.502496 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/1.log" Jan 31 05:23:11 crc kubenswrapper[5050]: I0131 05:23:11.735895 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:11 crc kubenswrapper[5050]: I0131 05:23:11.736023 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:11 crc kubenswrapper[5050]: E0131 05:23:11.736122 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:11 crc kubenswrapper[5050]: I0131 05:23:11.735922 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:11 crc kubenswrapper[5050]: E0131 05:23:11.736301 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:11 crc kubenswrapper[5050]: I0131 05:23:11.736248 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:11 crc kubenswrapper[5050]: E0131 05:23:11.736440 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:11 crc kubenswrapper[5050]: E0131 05:23:11.736540 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:13 crc kubenswrapper[5050]: I0131 05:23:13.735771 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:13 crc kubenswrapper[5050]: I0131 05:23:13.735850 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:13 crc kubenswrapper[5050]: E0131 05:23:13.735986 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:13 crc kubenswrapper[5050]: I0131 05:23:13.736241 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:13 crc kubenswrapper[5050]: I0131 05:23:13.736286 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:13 crc kubenswrapper[5050]: E0131 05:23:13.736373 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:13 crc kubenswrapper[5050]: E0131 05:23:13.736601 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:13 crc kubenswrapper[5050]: E0131 05:23:13.736805 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:14 crc kubenswrapper[5050]: I0131 05:23:14.737627 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.518707 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/3.log" Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.522512 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerStarted","Data":"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a"} Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.523174 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.555772 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podStartSLOduration=99.555712329 podStartE2EDuration="1m39.555712329s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:15.554869638 +0000 UTC 
m=+120.604031284" watchObservedRunningTime="2026-01-31 05:23:15.555712329 +0000 UTC m=+120.604873975" Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.684296 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ghk5r"] Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.684470 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:15 crc kubenswrapper[5050]: E0131 05:23:15.684633 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.735249 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.735274 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:15 crc kubenswrapper[5050]: E0131 05:23:15.735390 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:15 crc kubenswrapper[5050]: E0131 05:23:15.735540 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:15 crc kubenswrapper[5050]: I0131 05:23:15.736019 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:15 crc kubenswrapper[5050]: E0131 05:23:15.736433 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:15 crc kubenswrapper[5050]: E0131 05:23:15.749219 5050 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 05:23:15 crc kubenswrapper[5050]: E0131 05:23:15.848409 5050 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 05:23:17 crc kubenswrapper[5050]: I0131 05:23:17.736168 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:17 crc kubenswrapper[5050]: I0131 05:23:17.736238 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:17 crc kubenswrapper[5050]: I0131 05:23:17.736265 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:17 crc kubenswrapper[5050]: I0131 05:23:17.736191 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:17 crc kubenswrapper[5050]: E0131 05:23:17.736459 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:17 crc kubenswrapper[5050]: E0131 05:23:17.736582 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:17 crc kubenswrapper[5050]: E0131 05:23:17.736699 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:17 crc kubenswrapper[5050]: E0131 05:23:17.736853 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:19 crc kubenswrapper[5050]: I0131 05:23:19.735328 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:19 crc kubenswrapper[5050]: I0131 05:23:19.735370 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:19 crc kubenswrapper[5050]: I0131 05:23:19.735519 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:19 crc kubenswrapper[5050]: I0131 05:23:19.735554 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:19 crc kubenswrapper[5050]: E0131 05:23:19.737540 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:19 crc kubenswrapper[5050]: E0131 05:23:19.737697 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:19 crc kubenswrapper[5050]: E0131 05:23:19.737898 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:19 crc kubenswrapper[5050]: E0131 05:23:19.738085 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:20 crc kubenswrapper[5050]: E0131 05:23:20.850340 5050 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 05:23:21 crc kubenswrapper[5050]: I0131 05:23:21.736021 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:21 crc kubenswrapper[5050]: I0131 05:23:21.736052 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:21 crc kubenswrapper[5050]: E0131 05:23:21.736294 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:21 crc kubenswrapper[5050]: E0131 05:23:21.736459 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:21 crc kubenswrapper[5050]: I0131 05:23:21.736474 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:21 crc kubenswrapper[5050]: I0131 05:23:21.736583 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:21 crc kubenswrapper[5050]: E0131 05:23:21.736674 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:21 crc kubenswrapper[5050]: E0131 05:23:21.736822 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:22 crc kubenswrapper[5050]: I0131 05:23:22.736552 5050 scope.go:117] "RemoveContainer" containerID="bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966" Jan 31 05:23:23 crc kubenswrapper[5050]: I0131 05:23:23.557618 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/1.log" Jan 31 05:23:23 crc kubenswrapper[5050]: I0131 05:23:23.558115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerStarted","Data":"ac8fc87d22a662d586d590e706ecab572ece682431bb937e264475a7f7d39130"} Jan 31 05:23:23 crc kubenswrapper[5050]: I0131 05:23:23.735915 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:23 crc kubenswrapper[5050]: E0131 05:23:23.736113 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:23 crc kubenswrapper[5050]: I0131 05:23:23.736546 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:23 crc kubenswrapper[5050]: E0131 05:23:23.736636 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:23 crc kubenswrapper[5050]: I0131 05:23:23.736784 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:23 crc kubenswrapper[5050]: E0131 05:23:23.736845 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:23 crc kubenswrapper[5050]: I0131 05:23:23.737009 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:23 crc kubenswrapper[5050]: E0131 05:23:23.737073 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:25 crc kubenswrapper[5050]: I0131 05:23:25.735813 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:25 crc kubenswrapper[5050]: E0131 05:23:25.744413 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 05:23:25 crc kubenswrapper[5050]: I0131 05:23:25.744460 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:25 crc kubenswrapper[5050]: I0131 05:23:25.744506 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:25 crc kubenswrapper[5050]: I0131 05:23:25.744569 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:25 crc kubenswrapper[5050]: E0131 05:23:25.744664 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ghk5r" podUID="e415fe7d-85f7-4a4f-8683-ffb3a0a8096d" Jan 31 05:23:25 crc kubenswrapper[5050]: E0131 05:23:25.744849 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 05:23:25 crc kubenswrapper[5050]: E0131 05:23:25.745297 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.735679 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.735706 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.735786 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.735817 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.739078 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.739267 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.739431 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.739580 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.739738 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 31 05:23:27 crc kubenswrapper[5050]: I0131 05:23:27.741669 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 31 05:23:33 crc kubenswrapper[5050]: I0131 05:23:33.790699 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.449873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.506014 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.506896 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.507599 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.507931 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.508275 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.519149 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.524918 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.525144 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.525240 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.525266 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.525436 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.525626 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tkrtm"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.526141 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.526343 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.530060 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.531213 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.531373 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2bmrg"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.531880 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2bmrg"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.532418 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.532462 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.532647 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.532695 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.532832 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.533138 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.534098 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.534566 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fk4vq"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.534942 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.535229 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.536180 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.538042 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.538891 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.539173 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.539835 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.539866 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.540675 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.547028 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.547977 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v7nml"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.548767 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.549422 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.555903 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.561690 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.562473 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.565330 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck76z"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.566063 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.569652 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.569989 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.570167 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.570316 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.570663 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.570909 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.571108 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.571246 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.584144 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.586075 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.586482 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.586740 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.587805 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.588073 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.589003 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.589325 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.590381 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.602803 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.612816 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.615110 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.616515 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ln492"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.617060 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.616520 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.617479 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-serving-cert\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.617554 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.617622 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-trusted-ca-bundle\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.617687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91e87770-5e80-48f8-b274-31b0399b9935-audit-dir\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.629270 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-audit\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.622058 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.629363 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwgb\" (UniqueName: \"kubernetes.io/projected/41702b15-de0c-4d6d-8096-4a86ab88d33d-kube-api-access-5jwgb\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.629529 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/91e87770-5e80-48f8-b274-31b0399b9935-node-pullsecrets\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.629632 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83e6fe13-8779-4d8b-998e-75f7b39ea426-audit-dir\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.629709 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-serving-cert\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.621946 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.622227 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.622273 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.622417 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.622734 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.630607 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.622786 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.622828 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.623422 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.623506 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.626393 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.629796 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-encryption-config\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.631704 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-image-import-ca\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.631731 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nm9j\" (UniqueName: \"kubernetes.io/projected/066f98b0-80a0-4cdd-ada3-76a1ebab23de-kube-api-access-5nm9j\") pod \"downloads-7954f5f757-2bmrg\" (UID: \"066f98b0-80a0-4cdd-ada3-76a1ebab23de\") " pod="openshift-console/downloads-7954f5f757-2bmrg"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.631759 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-oauth-serving-cert\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.631791 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1c8049f-1b60-4e5c-a547-df42a78a841e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.631816 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqm8k\" (UniqueName: \"kubernetes.io/projected/e4380fc4-40ae-4321-bd83-5dce3d68fbae-kube-api-access-fqm8k\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.631921 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tnw\" (UniqueName: \"kubernetes.io/projected/91e87770-5e80-48f8-b274-31b0399b9935-kube-api-access-68tnw\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632114 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632241 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-oauth-config\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632276 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkt76\" (UniqueName: \"kubernetes.io/projected/dab2d02c-8e81-40c5-a5ca-98be1833702e-kube-api-access-kkt76\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632298 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59157317-ce37-4d74-b7b5-6495704e3571-config\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632340 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41702b15-de0c-4d6d-8096-4a86ab88d33d-config\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632368 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tmll\" (UniqueName: \"kubernetes.io/projected/85a5692d-28e5-45cd-85db-ba1dcef92b58-kube-api-access-2tmll\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632392 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a5692d-28e5-45cd-85db-ba1dcef92b58-serving-cert\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632427 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1c8049f-1b60-4e5c-a547-df42a78a841e-config\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632447 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e4380fc4-40ae-4321-bd83-5dce3d68fbae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632473 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq5w7\" (UniqueName: \"kubernetes.io/projected/312df477-54e5-4ebc-bde0-ec291393ece9-kube-api-access-xq5w7\") pod \"cluster-samples-operator-665b6dd947-p7l54\" (UID: \"312df477-54e5-4ebc-bde0-ec291393ece9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632495 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9kz\" (UniqueName: \"kubernetes.io/projected/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-kube-api-access-mk9kz\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632518 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41702b15-de0c-4d6d-8096-4a86ab88d33d-auth-proxy-config\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632538 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/312df477-54e5-4ebc-bde0-ec291393ece9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p7l54\" (UID: \"312df477-54e5-4ebc-bde0-ec291393ece9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632562 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632586 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-serving-cert\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632606 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632626 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-config\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-service-ca-bundle\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632675 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-client-ca\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632696 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632716 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-etcd-client\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632741 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-audit-policies\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632762 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-config\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632795 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp72x\" (UniqueName: \"kubernetes.io/projected/83e6fe13-8779-4d8b-998e-75f7b39ea426-kube-api-access-bp72x\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632815 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-etcd-client\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632836 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-etcd-serving-ca\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632914 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/41702b15-de0c-4d6d-8096-4a86ab88d33d-machine-approver-tls\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632942 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59157317-ce37-4d74-b7b5-6495704e3571-trusted-ca\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.632988 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-config\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633010 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sksb\" (UniqueName: \"kubernetes.io/projected/59157317-ce37-4d74-b7b5-6495704e3571-kube-api-access-4sksb\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633036 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-serving-cert\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633055 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-service-ca\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633073 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1c8049f-1b60-4e5c-a547-df42a78a841e-images\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633072 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633098 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-encryption-config\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633120 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44426\" (UniqueName: \"kubernetes.io/projected/e1c8049f-1b60-4e5c-a547-df42a78a841e-kube-api-access-44426\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633149 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59157317-ce37-4d74-b7b5-6495704e3571-serving-cert\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633168 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-config\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633174 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633207 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4380fc4-40ae-4321-bd83-5dce3d68fbae-serving-cert\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633232 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45bq6\" (UniqueName: \"kubernetes.io/projected/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-kube-api-access-45bq6\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633281 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633450 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633680 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633768 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.633934 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.634061 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.634151 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.634328 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.634477 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.634734 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.634855 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.635046 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.637394 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.637506 5050 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mvp9"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.641480 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.642838 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wkxcn"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.643578 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-87m8f"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.643937 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.644430 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-85mj8"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.644931 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.645267 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.645442 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.645667 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.645845 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.651480 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.652989 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.653492 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.653833 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tkrtm"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.653928 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.654211 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.656067 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.656342 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.656597 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.656649 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.656791 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.660287 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.661941 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mwtvl"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.662639 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.663559 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.663697 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.664100 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.667844 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.668174 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.668388 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.670836 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.672281 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.672428 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9jhn"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.673580 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.674572 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.688234 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.688496 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.689755 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.690311 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.690742 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.695229 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.695251 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.697141 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.730527 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.730696 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.730772 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.731003 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.732873 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734724 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjghj\" (UniqueName: \"kubernetes.io/projected/e458d0aa-1771-4429-ba32-39cc22f3d638-kube-api-access-qjghj\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734761 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/784e6114-262f-4937-831c-d16945f48683-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734788 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1c8049f-1b60-4e5c-a547-df42a78a841e-config\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734805 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e4380fc4-40ae-4321-bd83-5dce3d68fbae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734821 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq5w7\" (UniqueName: \"kubernetes.io/projected/312df477-54e5-4ebc-bde0-ec291393ece9-kube-api-access-xq5w7\") pod \"cluster-samples-operator-665b6dd947-p7l54\" (UID: \"312df477-54e5-4ebc-bde0-ec291393ece9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734837 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk9kz\" (UniqueName: \"kubernetes.io/projected/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-kube-api-access-mk9kz\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" Jan 31 
05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734851 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41702b15-de0c-4d6d-8096-4a86ab88d33d-auth-proxy-config\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734867 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5f6\" (UniqueName: \"kubernetes.io/projected/d152ed50-3f92-49c8-80cc-e73e4046259e-kube-api-access-9t5f6\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734883 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-stats-auth\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734899 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/312df477-54e5-4ebc-bde0-ec291393ece9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p7l54\" (UID: \"312df477-54e5-4ebc-bde0-ec291393ece9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734916 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-serving-cert\") pod 
\"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734930 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734974 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2db7527f-a8bb-431d-ab1c-32c2278822aa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x746s\" (UID: \"2db7527f-a8bb-431d-ab1c-32c2278822aa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.734991 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e458d0aa-1771-4429-ba32-39cc22f3d638-service-ca-bundle\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735007 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 
05:23:35.735023 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfclt\" (UniqueName: \"kubernetes.io/projected/f221629d-987d-49fe-bcaf-2708f516eec8-kube-api-access-dfclt\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735038 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/784e6114-262f-4937-831c-d16945f48683-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ac638f-6298-404b-9503-ac7f3aa58e4d-serving-cert\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735075 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735104 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-config\") pod \"console-f9d7485db-fk4vq\" (UID: 
\"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735130 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-service-ca-bundle\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735147 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/928c5e09-96f9-4f04-b797-e23c1efa1bcf-metrics-tls\") pod \"dns-operator-744455d44c-wkxcn\" (UID: \"928c5e09-96f9-4f04-b797-e23c1efa1bcf\") " pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735163 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-client-ca\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735180 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735199 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/24256d22-e420-4442-9064-af05c357a072-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735217 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-audit-policies\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735232 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-etcd-client\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735248 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735264 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cjc\" (UniqueName: \"kubernetes.io/projected/928c5e09-96f9-4f04-b797-e23c1efa1bcf-kube-api-access-f5cjc\") pod \"dns-operator-744455d44c-wkxcn\" (UID: \"928c5e09-96f9-4f04-b797-e23c1efa1bcf\") " pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" Jan 31 05:23:35 crc 
kubenswrapper[5050]: I0131 05:23:35.735279 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8gb\" (UniqueName: \"kubernetes.io/projected/42e534a6-009e-460c-9664-483a1f93ce63-kube-api-access-9l8gb\") pod \"migrator-59844c95c7-zkmjw\" (UID: \"42e534a6-009e-460c-9664-483a1f93ce63\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735295 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e46f669-9cbc-482c-a124-17a007b3f203-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735311 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735326 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-default-certificate\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735340 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-metrics-certs\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735355 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-ca\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735371 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9d8\" (UniqueName: \"kubernetes.io/projected/35ac638f-6298-404b-9503-ac7f3aa58e4d-kube-api-access-pd9d8\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735387 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp72x\" (UniqueName: \"kubernetes.io/projected/83e6fe13-8779-4d8b-998e-75f7b39ea426-kube-api-access-bp72x\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735401 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-config\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735417 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735434 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8v7\" (UniqueName: \"kubernetes.io/projected/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-kube-api-access-rw8v7\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735480 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c06728-b146-4ce3-b975-81e0431c9b38-config\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735497 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/41702b15-de0c-4d6d-8096-4a86ab88d33d-machine-approver-tls\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59157317-ce37-4d74-b7b5-6495704e3571-trusted-ca\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24256d22-e420-4442-9064-af05c357a072-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735568 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-etcd-client\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735592 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-etcd-serving-ca\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735606 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-serving-cert\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735621 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-config\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735634 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sksb\" (UniqueName: \"kubernetes.io/projected/59157317-ce37-4d74-b7b5-6495704e3571-kube-api-access-4sksb\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735650 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e46f669-9cbc-482c-a124-17a007b3f203-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735666 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-service-ca\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735680 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1c8049f-1b60-4e5c-a547-df42a78a841e-images\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735696 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-config\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735711 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6402576-676e-4b71-9634-6614fd9a177f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735726 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-encryption-config\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735742 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44426\" (UniqueName: \"kubernetes.io/projected/e1c8049f-1b60-4e5c-a547-df42a78a841e-kube-api-access-44426\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735758 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59157317-ce37-4d74-b7b5-6495704e3571-serving-cert\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735772 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735794 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-config\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735809 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-audit-policies\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735823 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f221629d-987d-49fe-bcaf-2708f516eec8-audit-dir\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735838 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735852 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24256d22-e420-4442-9064-af05c357a072-config\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735866 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-client-ca\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735882 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4380fc4-40ae-4321-bd83-5dce3d68fbae-serving-cert\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735898 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45bq6\" (UniqueName: \"kubernetes.io/projected/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-kube-api-access-45bq6\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735915 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-serving-cert\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735931 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735959 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c06728-b146-4ce3-b975-81e0431c9b38-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735976 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwxf\" (UniqueName: \"kubernetes.io/projected/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-kube-api-access-4xwxf\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.735992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736008 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736024 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736039 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736055 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-trusted-ca-bundle\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736069 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91e87770-5e80-48f8-b274-31b0399b9935-audit-dir\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736084 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e46f669-9cbc-482c-a124-17a007b3f203-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736100 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6402576-676e-4b71-9634-6614fd9a177f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736116 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736132 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736148 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9vk\" (UniqueName: \"kubernetes.io/projected/784e6114-262f-4937-831c-d16945f48683-kube-api-access-lm9vk\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736180 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d152ed50-3f92-49c8-80cc-e73e4046259e-serving-cert\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736197 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-audit\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736213 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwgb\" (UniqueName: \"kubernetes.io/projected/41702b15-de0c-4d6d-8096-4a86ab88d33d-kube-api-access-5jwgb\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736228 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83e6fe13-8779-4d8b-998e-75f7b39ea426-audit-dir\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736242 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/91e87770-5e80-48f8-b274-31b0399b9935-node-pullsecrets\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736258 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736274 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-serving-cert\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736289 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-encryption-config\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736325 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736342 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-oauth-serving-cert\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736357 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1c8049f-1b60-4e5c-a547-df42a78a841e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736373 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqm8k\" (UniqueName: \"kubernetes.io/projected/e4380fc4-40ae-4321-bd83-5dce3d68fbae-kube-api-access-fqm8k\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736389 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-image-import-ca\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736404 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nm9j\" (UniqueName: \"kubernetes.io/projected/066f98b0-80a0-4cdd-ada3-76a1ebab23de-kube-api-access-5nm9j\") pod \"downloads-7954f5f757-2bmrg\" (UID: \"066f98b0-80a0-4cdd-ada3-76a1ebab23de\") " pod="openshift-console/downloads-7954f5f757-2bmrg"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736420 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tnw\" (UniqueName: \"kubernetes.io/projected/91e87770-5e80-48f8-b274-31b0399b9935-kube-api-access-68tnw\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736443 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736459 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-oauth-config\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736477 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/784e6114-262f-4937-831c-d16945f48683-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736494 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkt76\" (UniqueName: \"kubernetes.io/projected/dab2d02c-8e81-40c5-a5ca-98be1833702e-kube-api-access-kkt76\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736509 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59157317-ce37-4d74-b7b5-6495704e3571-config\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736586 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736722 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tmll\" (UniqueName: \"kubernetes.io/projected/85a5692d-28e5-45cd-85db-ba1dcef92b58-kube-api-access-2tmll\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736743 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41702b15-de0c-4d6d-8096-4a86ab88d33d-config\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736817 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhh7\" (UniqueName: \"kubernetes.io/projected/2db7527f-a8bb-431d-ab1c-32c2278822aa-kube-api-access-pbhh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-x746s\" (UID: \"2db7527f-a8bb-431d-ab1c-32c2278822aa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736833 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-config\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736869 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5kw\" (UniqueName: \"kubernetes.io/projected/d6402576-676e-4b71-9634-6614fd9a177f-kube-api-access-9z5kw\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736971 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.736987 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-client\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.737004 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a5692d-28e5-45cd-85db-ba1dcef92b58-serving-cert\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.737041 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.737058 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-service-ca\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.737074 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9c06728-b146-4ce3-b975-81e0431c9b38-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.737109 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.738078 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.738181 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.738654 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.738729 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.738791 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.738909 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.739067 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1c8049f-1b60-4e5c-a547-df42a78a841e-config\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.739098 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.739469 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-config\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.740075 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-audit-policies\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.741485 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.742115 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-encryption-config\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.742168 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83e6fe13-8779-4d8b-998e-75f7b39ea426-audit-dir\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.742206 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/91e87770-5e80-48f8-b274-31b0399b9935-node-pullsecrets\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.742390 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-etcd-client\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.739468 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e4380fc4-40ae-4321-bd83-5dce3d68fbae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") 
" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.743897 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/41702b15-de0c-4d6d-8096-4a86ab88d33d-auth-proxy-config\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.744121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-etcd-client\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.744380 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-serving-cert\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.744437 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59157317-ce37-4d74-b7b5-6495704e3571-serving-cert\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.744920 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-config\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.745216 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59157317-ce37-4d74-b7b5-6495704e3571-trusted-ca\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.746058 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-config\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.746482 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-etcd-serving-ca\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.748226 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/41702b15-de0c-4d6d-8096-4a86ab88d33d-machine-approver-tls\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.748402 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.748484 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91e87770-5e80-48f8-b274-31b0399b9935-encryption-config\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.749126 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91e87770-5e80-48f8-b274-31b0399b9935-audit-dir\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.749207 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-client-ca\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.749666 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-service-ca-bundle\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.749930 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.750417 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1c8049f-1b60-4e5c-a547-df42a78a841e-images\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.750805 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4380fc4-40ae-4321-bd83-5dce3d68fbae-serving-cert\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.751447 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-audit\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.751491 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.752052 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83e6fe13-8779-4d8b-998e-75f7b39ea426-serving-cert\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.752578 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/83e6fe13-8779-4d8b-998e-75f7b39ea426-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.753869 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1c8049f-1b60-4e5c-a547-df42a78a841e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.753912 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-image-import-ca\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.754608 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59157317-ce37-4d74-b7b5-6495704e3571-config\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.755803 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" Jan 31 05:23:35 crc 
kubenswrapper[5050]: I0131 05:23:35.757854 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41702b15-de0c-4d6d-8096-4a86ab88d33d-config\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.757919 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-serving-cert\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.759287 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a5692d-28e5-45cd-85db-ba1dcef92b58-serving-cert\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.761063 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.761725 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.762102 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.762236 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 05:23:35 crc 
kubenswrapper[5050]: I0131 05:23:35.762342 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.762465 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.762835 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.763249 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-config\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.763390 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.763438 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.763501 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.763601 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.763701 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.763818 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.764008 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.764540 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.765182 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.765483 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.765711 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.765921 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.766340 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.766556 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.766741 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.766883 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.767153 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.766554 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.769234 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.772712 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.772870 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.773206 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.773415 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.773419 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2bmrg"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.773458 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.774386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-oauth-serving-cert\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.774428 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p776r"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.774458 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-service-ca\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.775190 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e87770-5e80-48f8-b274-31b0399b9935-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.775366 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.775465 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-trusted-ca-bundle\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.788238 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-oauth-config\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.788797 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.791703 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-blplz"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.792537 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.793084 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.794510 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.799078 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-serving-cert\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.799190 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.799534 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/312df477-54e5-4ebc-bde0-ec291393ece9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p7l54\" (UID: \"312df477-54e5-4ebc-bde0-ec291393ece9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.799848 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hfmnk"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.800669 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.801121 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.803126 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.803603 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.804131 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.805273 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v7nml"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.806762 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wkxcn"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.807662 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.809245 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck76z"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.809526 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.811436 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ln492"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.812489 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.813394 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.814351 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.815481 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-85mj8"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.816417 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.817363 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mvp9"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.818271 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9jhn"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.819225 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mwtvl"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.820162 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fk4vq"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.821128 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.822048 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.823562 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.824967 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww"] Jan 31 05:23:35 crc 
kubenswrapper[5050]: I0131 05:23:35.825897 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xkv6l"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.826836 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hfmnk"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.826840 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.827864 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.829981 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.830855 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.831770 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.832650 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.834374 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-blplz"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.835452 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xkv6l"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.836472 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838423 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838442 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838461 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9vk\" (UniqueName: \"kubernetes.io/projected/784e6114-262f-4937-831c-d16945f48683-kube-api-access-lm9vk\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838491 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d152ed50-3f92-49c8-80cc-e73e4046259e-serving-cert\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838539 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838592 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838650 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/784e6114-262f-4937-831c-d16945f48683-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838674 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838721 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbhh7\" 
(UniqueName: \"kubernetes.io/projected/2db7527f-a8bb-431d-ab1c-32c2278822aa-kube-api-access-pbhh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-x746s\" (UID: \"2db7527f-a8bb-431d-ab1c-32c2278822aa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838743 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-config\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838768 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5kw\" (UniqueName: \"kubernetes.io/projected/d6402576-676e-4b71-9634-6614fd9a177f-kube-api-access-9z5kw\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838789 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838810 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-client\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838834 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838856 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-service-ca\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838877 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9c06728-b146-4ce3-b975-81e0431c9b38-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838897 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.838983 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/784e6114-262f-4937-831c-d16945f48683-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839008 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjghj\" (UniqueName: \"kubernetes.io/projected/e458d0aa-1771-4429-ba32-39cc22f3d638-kube-api-access-qjghj\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839051 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t5f6\" (UniqueName: \"kubernetes.io/projected/d152ed50-3f92-49c8-80cc-e73e4046259e-kube-api-access-9t5f6\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839074 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-stats-auth\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839102 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2db7527f-a8bb-431d-ab1c-32c2278822aa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x746s\" (UID: \"2db7527f-a8bb-431d-ab1c-32c2278822aa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" Jan 31 05:23:35 crc 
kubenswrapper[5050]: I0131 05:23:35.839125 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e458d0aa-1771-4429-ba32-39cc22f3d638-service-ca-bundle\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839146 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfclt\" (UniqueName: \"kubernetes.io/projected/f221629d-987d-49fe-bcaf-2708f516eec8-kube-api-access-dfclt\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839169 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/784e6114-262f-4937-831c-d16945f48683-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839189 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ac638f-6298-404b-9503-ac7f3aa58e4d-serving-cert\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839210 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839233 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/928c5e09-96f9-4f04-b797-e23c1efa1bcf-metrics-tls\") pod \"dns-operator-744455d44c-wkxcn\" (UID: \"928c5e09-96f9-4f04-b797-e23c1efa1bcf\") " pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839260 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24256d22-e420-4442-9064-af05c357a072-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839288 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839308 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5cjc\" (UniqueName: \"kubernetes.io/projected/928c5e09-96f9-4f04-b797-e23c1efa1bcf-kube-api-access-f5cjc\") pod \"dns-operator-744455d44c-wkxcn\" (UID: \"928c5e09-96f9-4f04-b797-e23c1efa1bcf\") " pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839332 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1e46f669-9cbc-482c-a124-17a007b3f203-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839354 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839376 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-default-certificate\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839399 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-metrics-certs\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-ca\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 
05:23:35.839441 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9d8\" (UniqueName: \"kubernetes.io/projected/35ac638f-6298-404b-9503-ac7f3aa58e4d-kube-api-access-pd9d8\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839462 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8gb\" (UniqueName: \"kubernetes.io/projected/42e534a6-009e-460c-9664-483a1f93ce63-kube-api-access-9l8gb\") pod \"migrator-59844c95c7-zkmjw\" (UID: \"42e534a6-009e-460c-9664-483a1f93ce63\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839490 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839514 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw8v7\" (UniqueName: \"kubernetes.io/projected/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-kube-api-access-rw8v7\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839523 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839547 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c06728-b146-4ce3-b975-81e0431c9b38-config\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839573 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839580 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24256d22-e420-4442-9064-af05c357a072-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839663 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e46f669-9cbc-482c-a124-17a007b3f203-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839689 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-config\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839711 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6402576-676e-4b71-9634-6614fd9a177f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839731 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839763 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-audit-policies\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839781 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f221629d-987d-49fe-bcaf-2708f516eec8-audit-dir\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 
05:23:35.839798 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24256d22-e420-4442-9064-af05c357a072-config\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839814 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-client-ca\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839832 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839874 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c06728-b146-4ce3-b975-81e0431c9b38-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839891 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xwxf\" (UniqueName: \"kubernetes.io/projected/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-kube-api-access-4xwxf\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839908 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839925 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839944 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e46f669-9cbc-482c-a124-17a007b3f203-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" Jan 
31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.839976 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6402576-676e-4b71-9634-6614fd9a177f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.840924 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"] Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.841230 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-config\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.841242 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f221629d-987d-49fe-bcaf-2708f516eec8-audit-dir\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.841836 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.841921 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.842112 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-config\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.842133 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-audit-policies\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.842435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-client-ca\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.843501 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:35 crc kubenswrapper[5050]: 
I0131 05:23:35.843738 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d152ed50-3f92-49c8-80cc-e73e4046259e-serving-cert\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.844074 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.844219 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p776r"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.844240 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.844251 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4gkcm"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.844292 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-ca\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.844783 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4gkcm"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.844803 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.845725 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.846002 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.846358 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.847130 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.847228 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35ac638f-6298-404b-9503-ac7f3aa58e4d-serving-cert\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.847247 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.847370 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-client\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.847567 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.847609 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pqgfr"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.848584 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pqgfr"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.848741 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pqgfr"]
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.849225 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.849473 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.850226 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.850930 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.852373 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.856436 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.857290 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/35ac638f-6298-404b-9503-ac7f3aa58e4d-etcd-service-ca\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.870489 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.890304 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.910647 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.932921 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.952437 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.958267 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/928c5e09-96f9-4f04-b797-e23c1efa1bcf-metrics-tls\") pod \"dns-operator-744455d44c-wkxcn\" (UID: \"928c5e09-96f9-4f04-b797-e23c1efa1bcf\") " pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.970734 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 31 05:23:35 crc kubenswrapper[5050]: I0131 05:23:35.990369 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.010055 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.019150 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-default-certificate\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.030579 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.037838 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-stats-auth\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.050340 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.057288 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e458d0aa-1771-4429-ba32-39cc22f3d638-metrics-certs\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.070501 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.077888 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e458d0aa-1771-4429-ba32-39cc22f3d638-service-ca-bundle\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " pod="openshift-ingress/router-default-5444994796-87m8f"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.090749 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.111065 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.131478 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.141022 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e46f669-9cbc-482c-a124-17a007b3f203-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.150494 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.162090 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e46f669-9cbc-482c-a124-17a007b3f203-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.170822 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.190458 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.202117 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2db7527f-a8bb-431d-ab1c-32c2278822aa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x746s\" (UID: \"2db7527f-a8bb-431d-ab1c-32c2278822aa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.211082 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.221114 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6402576-676e-4b71-9634-6614fd9a177f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.231738 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.252194 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.271740 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.291226 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.311821 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.331788 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.340994 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24256d22-e420-4442-9064-af05c357a072-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.351216 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.352151 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24256d22-e420-4442-9064-af05c357a072-config\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.370914 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.391313 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.400719 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9c06728-b146-4ce3-b975-81e0431c9b38-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.410461 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.416307 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c06728-b146-4ce3-b975-81e0431c9b38-config\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.431096 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.451330 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.460214 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/784e6114-262f-4937-831c-d16945f48683-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.471071 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.500737 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.505221 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/784e6114-262f-4937-831c-d16945f48683-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.510827 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.530855 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.539463 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6402576-676e-4b71-9634-6614fd9a177f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.551316 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.570902 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.591444 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.611559 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.631144 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.668675 5050 request.go:700] Waited for 1.003351947s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.670752 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.691177 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.711179 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.731393 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.761854 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.770194 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.791413 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.810295 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.831364 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.851477 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.883949 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwgb\" (UniqueName: \"kubernetes.io/projected/41702b15-de0c-4d6d-8096-4a86ab88d33d-kube-api-access-5jwgb\") pod \"machine-approver-56656f9798-vc8t7\" (UID: \"41702b15-de0c-4d6d-8096-4a86ab88d33d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.908242 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44426\" (UniqueName: \"kubernetes.io/projected/e1c8049f-1b60-4e5c-a547-df42a78a841e-kube-api-access-44426\") pod \"machine-api-operator-5694c8668f-h7wkt\" (UID: \"e1c8049f-1b60-4e5c-a547-df42a78a841e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.947781 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp72x\" (UniqueName: \"kubernetes.io/projected/83e6fe13-8779-4d8b-998e-75f7b39ea426-kube-api-access-bp72x\") pod \"apiserver-7bbb656c7d-lm2gr\" (UID: \"83e6fe13-8779-4d8b-998e-75f7b39ea426\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.969224 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq5w7\" (UniqueName: \"kubernetes.io/projected/312df477-54e5-4ebc-bde0-ec291393ece9-kube-api-access-xq5w7\") pod \"cluster-samples-operator-665b6dd947-p7l54\" (UID: \"312df477-54e5-4ebc-bde0-ec291393ece9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"
Jan 31 05:23:36 crc kubenswrapper[5050]: I0131 05:23:36.994559 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk9kz\" (UniqueName: \"kubernetes.io/projected/d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac-kube-api-access-mk9kz\") pod \"authentication-operator-69f744f599-bgqwp\" (UID: \"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.011618 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.015962 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sksb\" (UniqueName: \"kubernetes.io/projected/59157317-ce37-4d74-b7b5-6495704e3571-kube-api-access-4sksb\") pod \"console-operator-58897d9998-tkrtm\" (UID: \"59157317-ce37-4d74-b7b5-6495704e3571\") " pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.028133 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.035732 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45bq6\" (UniqueName: \"kubernetes.io/projected/cef45bcb-8e16-4f2b-95ce-0363efb53d7f-kube-api-access-45bq6\") pod \"openshift-apiserver-operator-796bbdcf4f-65vlt\" (UID: \"cef45bcb-8e16-4f2b-95ce-0363efb53d7f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.040702 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.050663 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqm8k\" (UniqueName: \"kubernetes.io/projected/e4380fc4-40ae-4321-bd83-5dce3d68fbae-kube-api-access-fqm8k\") pod \"openshift-config-operator-7777fb866f-lkjld\" (UID: \"e4380fc4-40ae-4321-bd83-5dce3d68fbae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.061343 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.072191 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.082184 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nm9j\" (UniqueName: \"kubernetes.io/projected/066f98b0-80a0-4cdd-ada3-76a1ebab23de-kube-api-access-5nm9j\") pod \"downloads-7954f5f757-2bmrg\" (UID: \"066f98b0-80a0-4cdd-ada3-76a1ebab23de\") " pod="openshift-console/downloads-7954f5f757-2bmrg"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.096822 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tnw\" (UniqueName: \"kubernetes.io/projected/91e87770-5e80-48f8-b274-31b0399b9935-kube-api-access-68tnw\") pod \"apiserver-76f77b778f-v7nml\" (UID: \"91e87770-5e80-48f8-b274-31b0399b9935\") " pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.110687 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkt76\" (UniqueName: \"kubernetes.io/projected/dab2d02c-8e81-40c5-a5ca-98be1833702e-kube-api-access-kkt76\") pod \"console-f9d7485db-fk4vq\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.120935 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tkrtm"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.131166 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.131573 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tmll\" (UniqueName: \"kubernetes.io/projected/85a5692d-28e5-45cd-85db-ba1dcef92b58-kube-api-access-2tmll\") pod \"route-controller-manager-6576b87f9c-vszrj\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.150573 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.153792 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.170772 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.171270 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.190827 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.210075 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.230275 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.235362 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2bmrg"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.250311 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.271983 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.292832 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.294335 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fk4vq"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.310944 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.330895 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.337078 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v7nml"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.347629 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.353769 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.377454 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.389858 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.412287 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.432296 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.450599 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.461364 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bgqwp"]
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.468882 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tkrtm"]
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.472239 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.473734 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"]
Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.480584 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59157317_ce37_4d74_b7b5_6495704e3571.slice/crio-c3b23b0ac4eb554b100a9c5d2a16239510c07293da704f0b9eb92210c7c60c27 WatchSource:0}: Error finding container c3b23b0ac4eb554b100a9c5d2a16239510c07293da704f0b9eb92210c7c60c27: Status 404 returned error can't find the container with id c3b23b0ac4eb554b100a9c5d2a16239510c07293da704f0b9eb92210c7c60c27
Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.490460 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.493807 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85a5692d_28e5_45cd_85db_ba1dcef92b58.slice/crio-6816dad11c2c9fee4a803df74019368289cc823a2f199ee2ccc424dce2bd0606 WatchSource:0}: Error finding container 6816dad11c2c9fee4a803df74019368289cc823a2f199ee2ccc424dce2bd0606: Status 404 returned error can't find the container with id 6816dad11c2c9fee4a803df74019368289cc823a2f199ee2ccc424dce2bd0606
Jan 31 
05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.509924 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.517459 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2bmrg"] Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.528537 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod066f98b0_80a0_4cdd_ada3_76a1ebab23de.slice/crio-0135231d6345f0bedec5d30e0c5a256bbb5600a3b6e65e15ff3eec50bea6f2fa WatchSource:0}: Error finding container 0135231d6345f0bedec5d30e0c5a256bbb5600a3b6e65e15ff3eec50bea6f2fa: Status 404 returned error can't find the container with id 0135231d6345f0bedec5d30e0c5a256bbb5600a3b6e65e15ff3eec50bea6f2fa Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.529815 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.536221 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fk4vq"] Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.552775 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.554722 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54"] Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.574586 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt"] Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.574635 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:23:37 crc 
kubenswrapper[5050]: I0131 05:23:37.578233 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr"] Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.579191 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lkjld"] Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.580505 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h7wkt"] Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.584085 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1c8049f_1b60_4e5c_a547_df42a78a841e.slice/crio-519e90932bfadf8e9fafff057e328d87465b81df4bab6248d3e867bbb483d7cd WatchSource:0}: Error finding container 519e90932bfadf8e9fafff057e328d87465b81df4bab6248d3e867bbb483d7cd: Status 404 returned error can't find the container with id 519e90932bfadf8e9fafff057e328d87465b81df4bab6248d3e867bbb483d7cd Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.584415 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4380fc4_40ae_4321_bd83_5dce3d68fbae.slice/crio-40390fe145f58d09f83cbcbb3bcf4bb0b6e55ced5e8afc94c2bc7346d6bea405 WatchSource:0}: Error finding container 40390fe145f58d09f83cbcbb3bcf4bb0b6e55ced5e8afc94c2bc7346d6bea405: Status 404 returned error can't find the container with id 40390fe145f58d09f83cbcbb3bcf4bb0b6e55ced5e8afc94c2bc7346d6bea405 Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.591248 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.594244 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcef45bcb_8e16_4f2b_95ce_0363efb53d7f.slice/crio-2559699c4e10c830c51754902fe032fd12bb1d55e2d478055f16cf14a8099772 WatchSource:0}: Error finding container 2559699c4e10c830c51754902fe032fd12bb1d55e2d478055f16cf14a8099772: Status 404 returned error can't find the container with id 2559699c4e10c830c51754902fe032fd12bb1d55e2d478055f16cf14a8099772 Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.596098 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83e6fe13_8779_4d8b_998e_75f7b39ea426.slice/crio-572e356022b2257eba2dcbc2c57f3284888d137bcae5107da306a45cebe559b2 WatchSource:0}: Error finding container 572e356022b2257eba2dcbc2c57f3284888d137bcae5107da306a45cebe559b2: Status 404 returned error can't find the container with id 572e356022b2257eba2dcbc2c57f3284888d137bcae5107da306a45cebe559b2 Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.610267 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.630839 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.631776 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" event={"ID":"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac","Type":"ContainerStarted","Data":"514b0e54cd352b8e023926473f8fdf561ded3818938ca66125d77e8097e3b7b3"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.631814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" 
event={"ID":"d0a76e0b-79be-4aaa-a9d9-88bc3a2898ac","Type":"ContainerStarted","Data":"0ea37efc4c88462cd9095fbc218dc5eb2934d2f9d11b1e98e8eaa296ed5cb2d8"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.634551 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt" event={"ID":"cef45bcb-8e16-4f2b-95ce-0363efb53d7f","Type":"ContainerStarted","Data":"2559699c4e10c830c51754902fe032fd12bb1d55e2d478055f16cf14a8099772"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.635881 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt" event={"ID":"e1c8049f-1b60-4e5c-a547-df42a78a841e","Type":"ContainerStarted","Data":"519e90932bfadf8e9fafff057e328d87465b81df4bab6248d3e867bbb483d7cd"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.637593 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2bmrg" event={"ID":"066f98b0-80a0-4cdd-ada3-76a1ebab23de","Type":"ContainerStarted","Data":"0135231d6345f0bedec5d30e0c5a256bbb5600a3b6e65e15ff3eec50bea6f2fa"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.640255 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" event={"ID":"41702b15-de0c-4d6d-8096-4a86ab88d33d","Type":"ContainerStarted","Data":"7b109a3daada45f7b8966d9753d930e8c168c78c68df15464c6f18be73456a70"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.640293 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" event={"ID":"41702b15-de0c-4d6d-8096-4a86ab88d33d","Type":"ContainerStarted","Data":"afaf75a3a41a21de887bbbb12a2da89bbe625a80c6a1953b1be1c2a18f2d2e2a"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.642395 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" event={"ID":"83e6fe13-8779-4d8b-998e-75f7b39ea426","Type":"ContainerStarted","Data":"572e356022b2257eba2dcbc2c57f3284888d137bcae5107da306a45cebe559b2"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.645768 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" event={"ID":"e4380fc4-40ae-4321-bd83-5dce3d68fbae","Type":"ContainerStarted","Data":"40390fe145f58d09f83cbcbb3bcf4bb0b6e55ced5e8afc94c2bc7346d6bea405"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.650060 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" event={"ID":"85a5692d-28e5-45cd-85db-ba1dcef92b58","Type":"ContainerStarted","Data":"6816dad11c2c9fee4a803df74019368289cc823a2f199ee2ccc424dce2bd0606"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.650173 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.652394 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tkrtm" event={"ID":"59157317-ce37-4d74-b7b5-6495704e3571","Type":"ContainerStarted","Data":"7ea2cabc75e0b2339e36ff255497d992aea84d9dfbb634a843397664e9e3a9a9"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.652423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tkrtm" event={"ID":"59157317-ce37-4d74-b7b5-6495704e3571","Type":"ContainerStarted","Data":"c3b23b0ac4eb554b100a9c5d2a16239510c07293da704f0b9eb92210c7c60c27"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.653152 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tkrtm" Jan 31 05:23:37 crc 
kubenswrapper[5050]: I0131 05:23:37.654026 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fk4vq" event={"ID":"dab2d02c-8e81-40c5-a5ca-98be1833702e","Type":"ContainerStarted","Data":"28341ece364e875241f029a1fbb844c33c8b5db200de72a07f402ee1b4e93879"} Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.658129 5050 patch_prober.go:28] interesting pod/console-operator-58897d9998-tkrtm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.658194 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tkrtm" podUID="59157317-ce37-4d74-b7b5-6495704e3571" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.669303 5050 request.go:700] Waited for 1.842093724s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.671758 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.706938 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9vk\" (UniqueName: \"kubernetes.io/projected/784e6114-262f-4937-831c-d16945f48683-kube-api-access-lm9vk\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:37 
crc kubenswrapper[5050]: I0131 05:23:37.724192 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfclt\" (UniqueName: \"kubernetes.io/projected/f221629d-987d-49fe-bcaf-2708f516eec8-kube-api-access-dfclt\") pod \"oauth-openshift-558db77b4-ln492\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.743100 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/784e6114-262f-4937-831c-d16945f48683-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2rw5\" (UID: \"784e6114-262f-4937-831c-d16945f48683\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.764053 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbhh7\" (UniqueName: \"kubernetes.io/projected/2db7527f-a8bb-431d-ab1c-32c2278822aa-kube-api-access-pbhh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-x746s\" (UID: \"2db7527f-a8bb-431d-ab1c-32c2278822aa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.764213 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.789682 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.803231 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z5kw\" (UniqueName: \"kubernetes.io/projected/d6402576-676e-4b71-9634-6614fd9a177f-kube-api-access-9z5kw\") pod \"kube-storage-version-migrator-operator-b67b599dd-kmd7l\" (UID: \"d6402576-676e-4b71-9634-6614fd9a177f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.822165 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v7nml"] Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.823588 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e46f669-9cbc-482c-a124-17a007b3f203-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-77qqm\" (UID: \"1e46f669-9cbc-482c-a124-17a007b3f203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.849310 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjghj\" (UniqueName: \"kubernetes.io/projected/e458d0aa-1771-4429-ba32-39cc22f3d638-kube-api-access-qjghj\") pod \"router-default-5444994796-87m8f\" (UID: \"e458d0aa-1771-4429-ba32-39cc22f3d638\") " 
pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.867121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9d8\" (UniqueName: \"kubernetes.io/projected/35ac638f-6298-404b-9503-ac7f3aa58e4d-kube-api-access-pd9d8\") pod \"etcd-operator-b45778765-85mj8\" (UID: \"35ac638f-6298-404b-9503-ac7f3aa58e4d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.895791 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t5f6\" (UniqueName: \"kubernetes.io/projected/d152ed50-3f92-49c8-80cc-e73e4046259e-kube-api-access-9t5f6\") pod \"controller-manager-879f6c89f-ck76z\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.910671 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l8gb\" (UniqueName: \"kubernetes.io/projected/42e534a6-009e-460c-9664-483a1f93ce63-kube-api-access-9l8gb\") pod \"migrator-59844c95c7-zkmjw\" (UID: \"42e534a6-009e-460c-9664-483a1f93ce63\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.913532 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.930047 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5"] Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.930786 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.950220 5050 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 05:23:37 crc kubenswrapper[5050]: W0131 05:23:37.952635 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod784e6114_262f_4937_831c_d16945f48683.slice/crio-00ad94dbbddea5c95af94fc9ff198cab1bf2b1f7d574d7435de3f77848dc09e7 WatchSource:0}: Error finding container 00ad94dbbddea5c95af94fc9ff198cab1bf2b1f7d574d7435de3f77848dc09e7: Status 404 returned error can't find the container with id 00ad94dbbddea5c95af94fc9ff198cab1bf2b1f7d574d7435de3f77848dc09e7 Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.963505 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.970263 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.984789 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24256d22-e420-4442-9064-af05c357a072-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-whzj8\" (UID: \"24256d22-e420-4442-9064-af05c357a072\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" Jan 31 05:23:37 crc kubenswrapper[5050]: I0131 05:23:37.985069 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.004363 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.006001 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw8v7\" (UniqueName: \"kubernetes.io/projected/ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02-kube-api-access-rw8v7\") pod \"openshift-controller-manager-operator-756b6f6bc6-d5cf9\" (UID: \"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.012661 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.020454 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.026267 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xwxf\" (UniqueName: \"kubernetes.io/projected/d2736e97-3103-42dc-9d1d-a3bf1b4971ec-kube-api-access-4xwxf\") pod \"cluster-image-registry-operator-dc59b4c8b-gl9vw\" (UID: \"d2736e97-3103-42dc-9d1d-a3bf1b4971ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" Jan 31 05:23:38 crc kubenswrapper[5050]: W0131 05:23:38.033472 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode458d0aa_1771_4429_ba32_39cc22f3d638.slice/crio-53f7b6a12c3ab33f84a13a43a76e1a1790a36e53c7c2672f5af8ee8402c7830a WatchSource:0}: Error finding container 53f7b6a12c3ab33f84a13a43a76e1a1790a36e53c7c2672f5af8ee8402c7830a: Status 404 returned error can't find the container with id 
53f7b6a12c3ab33f84a13a43a76e1a1790a36e53c7c2672f5af8ee8402c7830a Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.045548 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.047296 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5cjc\" (UniqueName: \"kubernetes.io/projected/928c5e09-96f9-4f04-b797-e23c1efa1bcf-kube-api-access-f5cjc\") pod \"dns-operator-744455d44c-wkxcn\" (UID: \"928c5e09-96f9-4f04-b797-e23c1efa1bcf\") " pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.057036 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.070515 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.071497 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9c06728-b146-4ce3-b975-81e0431c9b38-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cqnnx\" (UID: \"a9c06728-b146-4ce3-b975-81e0431c9b38\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.076424 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.090029 5050 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.111636 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.182640 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-registry-certificates\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.182888 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgnr\" (UniqueName: \"kubernetes.io/projected/c51760c2-79c8-4d25-99a6-bfb51d768be8-kube-api-access-ccgnr\") pod \"multus-admission-controller-857f4d67dd-mwtvl\" (UID: \"c51760c2-79c8-4d25-99a6-bfb51d768be8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 
05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.182909 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82582675-89e4-4783-84df-ea11774c62aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.182929 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-trusted-ca\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.182945 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-registry-tls\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.182982 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-bound-sa-token\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.182998 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c51760c2-79c8-4d25-99a6-bfb51d768be8-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-mwtvl\" (UID: \"c51760c2-79c8-4d25-99a6-bfb51d768be8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.183031 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.183050 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82582675-89e4-4783-84df-ea11774c62aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.183076 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x74c\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-kube-api-access-7x74c\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.183435 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:38.683413005 +0000 UTC m=+143.732574601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.189817 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck76z"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.257423 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.271709 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-85mj8"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.278489 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.283601 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.283800 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 05:23:38.783774723 +0000 UTC m=+143.832936319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.283908 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/576a09ad-58f8-4eb7-9d08-0e7183a4996b-node-bootstrap-token\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.283996 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef79cfe6-00eb-40e7-941f-4013514c4fd2-proxy-tls\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284020 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/9607a267-53c0-4432-b3aa-dd7d0e04ba77-kube-api-access-2jl7b\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284056 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/199faf50-26fd-485c-871d-c4a2d9cc33e6-srv-cert\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284088 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-csi-data-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284105 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-tmpfs\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284163 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfjg\" (UniqueName: \"kubernetes.io/projected/576a09ad-58f8-4eb7-9d08-0e7183a4996b-kube-api-access-krfjg\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284184 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-registry-certificates\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc 
kubenswrapper[5050]: I0131 05:23:38.284229 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9607a267-53c0-4432-b3aa-dd7d0e04ba77-metrics-tls\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284258 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24krw\" (UniqueName: \"kubernetes.io/projected/8a22853a-72dd-48ac-aca9-1761185740ba-kube-api-access-24krw\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284370 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef79cfe6-00eb-40e7-941f-4013514c4fd2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284387 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-apiservice-cert\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284407 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccgnr\" (UniqueName: \"kubernetes.io/projected/c51760c2-79c8-4d25-99a6-bfb51d768be8-kube-api-access-ccgnr\") pod 
\"multus-admission-controller-857f4d67dd-mwtvl\" (UID: \"c51760c2-79c8-4d25-99a6-bfb51d768be8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.284503 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82582675-89e4-4783-84df-ea11774c62aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.287061 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-trusted-ca\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.287176 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-registry-tls\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.287458 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82582675-89e4-4783-84df-ea11774c62aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.287607 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/199faf50-26fd-485c-871d-c4a2d9cc33e6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.287978 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-bound-sa-token\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288014 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288075 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b4f1dd9-6c94-4551-a88f-2b83c154962a-signing-key\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288216 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c51760c2-79c8-4d25-99a6-bfb51d768be8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mwtvl\" (UID: \"c51760c2-79c8-4d25-99a6-bfb51d768be8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 05:23:38 crc kubenswrapper[5050]: 
I0131 05:23:38.288239 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5915d8a1-8561-481b-990d-60cd35f30d7c-secret-volume\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288255 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57e06675-0696-4d1e-9058-920532a96cdf-proxy-tls\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288287 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-trusted-ca\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288521 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288550 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef79cfe6-00eb-40e7-941f-4013514c4fd2-images\") pod 
\"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsm64\" (UniqueName: \"kubernetes.io/projected/e4ab724a-e633-4639-938c-317c550ba114-kube-api-access-tsm64\") pod \"ingress-canary-xkv6l\" (UID: \"e4ab724a-e633-4639-938c-317c550ba114\") " pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.288629 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/43fee678-54c4-48f9-a194-720209531460-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wq7pt\" (UID: \"43fee678-54c4-48f9-a194-720209531460\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.289108 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.289135 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2222k\" (UniqueName: \"kubernetes.io/projected/ef79cfe6-00eb-40e7-941f-4013514c4fd2-kube-api-access-2222k\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.289163 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82582675-89e4-4783-84df-ea11774c62aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.289299 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtxsm\" (UniqueName: \"kubernetes.io/projected/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-kube-api-access-gtxsm\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.289752 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:38.789628486 +0000 UTC m=+143.838790082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.290129 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9291437-d019-4d82-99b5-6b7322ea1750-srv-cert\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.290305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57e06675-0696-4d1e-9058-920532a96cdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.289810 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-registry-certificates\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291229 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzd47\" (UniqueName: 
\"kubernetes.io/projected/43fee678-54c4-48f9-a194-720209531460-kube-api-access-jzd47\") pod \"package-server-manager-789f6589d5-wq7pt\" (UID: \"43fee678-54c4-48f9-a194-720209531460\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291273 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x74c\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-kube-api-access-7x74c\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291290 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5915d8a1-8561-481b-990d-60cd35f30d7c-config-volume\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9607a267-53c0-4432-b3aa-dd7d0e04ba77-config-volume\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291482 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkldv\" (UniqueName: \"kubernetes.io/projected/5915d8a1-8561-481b-990d-60cd35f30d7c-kube-api-access-tkldv\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc 
kubenswrapper[5050]: I0131 05:23:38.291553 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-registration-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291591 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79b29-fca8-4411-b6c8-478630090b03-config\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291794 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4ab724a-e633-4639-938c-317c550ba114-cert\") pod \"ingress-canary-xkv6l\" (UID: \"e4ab724a-e633-4639-938c-317c550ba114\") " pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-mountpoint-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291851 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/576a09ad-58f8-4eb7-9d08-0e7183a4996b-certs\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " 
pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291868 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b4f1dd9-6c94-4551-a88f-2b83c154962a-signing-cabundle\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.291918 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q56h\" (UniqueName: \"kubernetes.io/projected/199faf50-26fd-485c-871d-c4a2d9cc33e6-kube-api-access-5q56h\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292018 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpn9\" (UniqueName: \"kubernetes.io/projected/65e79b29-fca8-4411-b6c8-478630090b03-kube-api-access-2qpn9\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292042 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftw82\" (UniqueName: \"kubernetes.io/projected/a8c36ad8-2c55-41d9-8bcc-8accc3501626-kube-api-access-ftw82\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292092 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4kh2k\" (UniqueName: \"kubernetes.io/projected/7b4f1dd9-6c94-4551-a88f-2b83c154962a-kube-api-access-4kh2k\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292109 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd74t\" (UniqueName: \"kubernetes.io/projected/57e06675-0696-4d1e-9058-920532a96cdf-kube-api-access-gd74t\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292124 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-plugins-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292178 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79b29-fca8-4411-b6c8-478630090b03-serving-cert\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292193 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9291437-d019-4d82-99b5-6b7322ea1750-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292251 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-webhook-cert\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292426 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-socket-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.292467 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6glb\" (UniqueName: \"kubernetes.io/projected/a9291437-d019-4d82-99b5-6b7322ea1750-kube-api-access-f6glb\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.293953 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c51760c2-79c8-4d25-99a6-bfb51d768be8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mwtvl\" (UID: \"c51760c2-79c8-4d25-99a6-bfb51d768be8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.294521 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-registry-tls\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.298278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82582675-89e4-4783-84df-ea11774c62aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.301347 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.305816 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ln492"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.325802 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccgnr\" (UniqueName: \"kubernetes.io/projected/c51760c2-79c8-4d25-99a6-bfb51d768be8-kube-api-access-ccgnr\") pod \"multus-admission-controller-857f4d67dd-mwtvl\" (UID: \"c51760c2-79c8-4d25-99a6-bfb51d768be8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.338286 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.345483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-bound-sa-token\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.363027 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x74c\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-kube-api-access-7x74c\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.377646 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.377847 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393183 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393399 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79b29-fca8-4411-b6c8-478630090b03-serving-cert\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393426 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9291437-d019-4d82-99b5-6b7322ea1750-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393443 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-webhook-cert\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393460 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-socket-dir\") 
pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393549 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6glb\" (UniqueName: \"kubernetes.io/projected/a9291437-d019-4d82-99b5-6b7322ea1750-kube-api-access-f6glb\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/576a09ad-58f8-4eb7-9d08-0e7183a4996b-node-bootstrap-token\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393590 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef79cfe6-00eb-40e7-941f-4013514c4fd2-proxy-tls\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393605 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/9607a267-53c0-4432-b3aa-dd7d0e04ba77-kube-api-access-2jl7b\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393622 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/199faf50-26fd-485c-871d-c4a2d9cc33e6-srv-cert\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393637 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-csi-data-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393653 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-tmpfs\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393670 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krfjg\" (UniqueName: \"kubernetes.io/projected/576a09ad-58f8-4eb7-9d08-0e7183a4996b-kube-api-access-krfjg\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393689 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24krw\" (UniqueName: \"kubernetes.io/projected/8a22853a-72dd-48ac-aca9-1761185740ba-kube-api-access-24krw\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9607a267-53c0-4432-b3aa-dd7d0e04ba77-metrics-tls\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393727 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef79cfe6-00eb-40e7-941f-4013514c4fd2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393744 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-apiservice-cert\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393763 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/199faf50-26fd-485c-871d-c4a2d9cc33e6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393781 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: 
I0131 05:23:38.393797 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b4f1dd9-6c94-4551-a88f-2b83c154962a-signing-key\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393815 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5915d8a1-8561-481b-990d-60cd35f30d7c-secret-volume\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393831 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393846 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef79cfe6-00eb-40e7-941f-4013514c4fd2-images\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393861 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57e06675-0696-4d1e-9058-920532a96cdf-proxy-tls\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393879 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsm64\" (UniqueName: \"kubernetes.io/projected/e4ab724a-e633-4639-938c-317c550ba114-kube-api-access-tsm64\") pod \"ingress-canary-xkv6l\" (UID: \"e4ab724a-e633-4639-938c-317c550ba114\") " pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393896 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/43fee678-54c4-48f9-a194-720209531460-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wq7pt\" (UID: \"43fee678-54c4-48f9-a194-720209531460\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393922 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2222k\" (UniqueName: \"kubernetes.io/projected/ef79cfe6-00eb-40e7-941f-4013514c4fd2-kube-api-access-2222k\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393954 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtxsm\" (UniqueName: \"kubernetes.io/projected/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-kube-api-access-gtxsm\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.393992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9291437-d019-4d82-99b5-6b7322ea1750-srv-cert\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394018 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57e06675-0696-4d1e-9058-920532a96cdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394034 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5915d8a1-8561-481b-990d-60cd35f30d7c-config-volume\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394052 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzd47\" (UniqueName: \"kubernetes.io/projected/43fee678-54c4-48f9-a194-720209531460-kube-api-access-jzd47\") pod \"package-server-manager-789f6589d5-wq7pt\" (UID: \"43fee678-54c4-48f9-a194-720209531460\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394069 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkldv\" (UniqueName: \"kubernetes.io/projected/5915d8a1-8561-481b-990d-60cd35f30d7c-kube-api-access-tkldv\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394089 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9607a267-53c0-4432-b3aa-dd7d0e04ba77-config-volume\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394106 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-registration-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394121 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79b29-fca8-4411-b6c8-478630090b03-config\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394139 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-mountpoint-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394153 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4ab724a-e633-4639-938c-317c550ba114-cert\") pod \"ingress-canary-xkv6l\" (UID: \"e4ab724a-e633-4639-938c-317c550ba114\") " 
pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394168 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b4f1dd9-6c94-4551-a88f-2b83c154962a-signing-cabundle\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394183 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/576a09ad-58f8-4eb7-9d08-0e7183a4996b-certs\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394199 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpn9\" (UniqueName: \"kubernetes.io/projected/65e79b29-fca8-4411-b6c8-478630090b03-kube-api-access-2qpn9\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394216 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q56h\" (UniqueName: \"kubernetes.io/projected/199faf50-26fd-485c-871d-c4a2d9cc33e6-kube-api-access-5q56h\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394234 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftw82\" (UniqueName: \"kubernetes.io/projected/a8c36ad8-2c55-41d9-8bcc-8accc3501626-kube-api-access-ftw82\") pod 
\"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394255 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kh2k\" (UniqueName: \"kubernetes.io/projected/7b4f1dd9-6c94-4551-a88f-2b83c154962a-kube-api-access-4kh2k\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394271 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd74t\" (UniqueName: \"kubernetes.io/projected/57e06675-0696-4d1e-9058-920532a96cdf-kube-api-access-gd74t\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394287 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-plugins-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394523 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-plugins-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394547 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-csi-data-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.394796 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:38.894777405 +0000 UTC m=+143.943939001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.394849 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-socket-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.395718 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57e06675-0696-4d1e-9058-920532a96cdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.396096 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9607a267-53c0-4432-b3aa-dd7d0e04ba77-config-volume\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.397509 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7b4f1dd9-6c94-4551-a88f-2b83c154962a-signing-cabundle\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.397784 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.398802 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef79cfe6-00eb-40e7-941f-4013514c4fd2-proxy-tls\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.399204 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79b29-fca8-4411-b6c8-478630090b03-serving-cert\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.399266 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/199faf50-26fd-485c-871d-c4a2d9cc33e6-srv-cert\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.399717 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-mountpoint-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.399817 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79b29-fca8-4411-b6c8-478630090b03-config\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.399915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8a22853a-72dd-48ac-aca9-1761185740ba-registration-dir\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.400316 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5915d8a1-8561-481b-990d-60cd35f30d7c-config-volume\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.400624 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4ab724a-e633-4639-938c-317c550ba114-cert\") pod \"ingress-canary-xkv6l\" (UID: \"e4ab724a-e633-4639-938c-317c550ba114\") " pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.400982 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9607a267-53c0-4432-b3aa-dd7d0e04ba77-metrics-tls\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.401387 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/576a09ad-58f8-4eb7-9d08-0e7183a4996b-node-bootstrap-token\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.402240 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef79cfe6-00eb-40e7-941f-4013514c4fd2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.402256 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef79cfe6-00eb-40e7-941f-4013514c4fd2-images\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.402411 5050 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-tmpfs\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.402687 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-webhook-cert\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.403940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5915d8a1-8561-481b-990d-60cd35f30d7c-secret-volume\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.404830 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57e06675-0696-4d1e-9058-920532a96cdf-proxy-tls\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.406915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9291437-d019-4d82-99b5-6b7322ea1750-srv-cert\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.407105 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/576a09ad-58f8-4eb7-9d08-0e7183a4996b-certs\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.407296 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-apiservice-cert\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.411000 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9291437-d019-4d82-99b5-6b7322ea1750-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.411938 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/43fee678-54c4-48f9-a194-720209531460-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wq7pt\" (UID: \"43fee678-54c4-48f9-a194-720209531460\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.412114 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.412377 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/199faf50-26fd-485c-871d-c4a2d9cc33e6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.415544 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7b4f1dd9-6c94-4551-a88f-2b83c154962a-signing-key\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.454183 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24krw\" (UniqueName: \"kubernetes.io/projected/8a22853a-72dd-48ac-aca9-1761185740ba-kube-api-access-24krw\") pod \"csi-hostpathplugin-pqgfr\" (UID: \"8a22853a-72dd-48ac-aca9-1761185740ba\") " pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.463272 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jl7b\" (UniqueName: \"kubernetes.io/projected/9607a267-53c0-4432-b3aa-dd7d0e04ba77-kube-api-access-2jl7b\") pod \"dns-default-hfmnk\" (UID: \"9607a267-53c0-4432-b3aa-dd7d0e04ba77\") " pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.491363 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krfjg\" (UniqueName: \"kubernetes.io/projected/576a09ad-58f8-4eb7-9d08-0e7183a4996b-kube-api-access-krfjg\") pod \"machine-config-server-4gkcm\" (UID: \"576a09ad-58f8-4eb7-9d08-0e7183a4996b\") " 
pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.495216 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.495552 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:38.995539918 +0000 UTC m=+144.044701514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.506240 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6glb\" (UniqueName: \"kubernetes.io/projected/a9291437-d019-4d82-99b5-6b7322ea1750-kube-api-access-f6glb\") pod \"olm-operator-6b444d44fb-fhsww\" (UID: \"a9291437-d019-4d82-99b5-6b7322ea1750\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.517180 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.528642 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzd47\" (UniqueName: \"kubernetes.io/projected/43fee678-54c4-48f9-a194-720209531460-kube-api-access-jzd47\") pod \"package-server-manager-789f6589d5-wq7pt\" (UID: \"43fee678-54c4-48f9-a194-720209531460\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.543618 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkldv\" (UniqueName: \"kubernetes.io/projected/5915d8a1-8561-481b-990d-60cd35f30d7c-kube-api-access-tkldv\") pod \"collect-profiles-29497275-dzs5b\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.570574 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.573259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kh2k\" (UniqueName: \"kubernetes.io/projected/7b4f1dd9-6c94-4551-a88f-2b83c154962a-kube-api-access-4kh2k\") pod \"service-ca-9c57cc56f-blplz\" (UID: \"7b4f1dd9-6c94-4551-a88f-2b83c154962a\") " pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.573770 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.587131 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftw82\" (UniqueName: 
\"kubernetes.io/projected/a8c36ad8-2c55-41d9-8bcc-8accc3501626-kube-api-access-ftw82\") pod \"marketplace-operator-79b997595-g9jhn\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.596159 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.596307 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.096283031 +0000 UTC m=+144.145444627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.596892 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.597225 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.097214477 +0000 UTC m=+144.146376073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.604489 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2222k\" (UniqueName: \"kubernetes.io/projected/ef79cfe6-00eb-40e7-941f-4013514c4fd2-kube-api-access-2222k\") pod \"machine-config-operator-74547568cd-gcjhn\" (UID: \"ef79cfe6-00eb-40e7-941f-4013514c4fd2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: W0131 05:23:38.613886 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e46f669_9cbc_482c_a124_17a007b3f203.slice/crio-d19a6d2e6c119a55718690c29db4de10e3409d2c71129ed19eb8132da20ba196 WatchSource:0}: Error finding container d19a6d2e6c119a55718690c29db4de10e3409d2c71129ed19eb8132da20ba196: Status 404 returned error can't find the container with id d19a6d2e6c119a55718690c29db4de10e3409d2c71129ed19eb8132da20ba196 Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.625876 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q56h\" (UniqueName: \"kubernetes.io/projected/199faf50-26fd-485c-871d-c4a2d9cc33e6-kube-api-access-5q56h\") pod \"catalog-operator-68c6474976-kn2nd\" (UID: \"199faf50-26fd-485c-871d-c4a2d9cc33e6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.645397 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-mwtvl"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.647580 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsm64\" (UniqueName: \"kubernetes.io/projected/e4ab724a-e633-4639-938c-317c550ba114-kube-api-access-tsm64\") pod \"ingress-canary-xkv6l\" (UID: \"e4ab724a-e633-4639-938c-317c550ba114\") " pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.672986 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd74t\" (UniqueName: \"kubernetes.io/projected/57e06675-0696-4d1e-9058-920532a96cdf-kube-api-access-gd74t\") pod \"machine-config-controller-84d6567774-p776r\" (UID: \"57e06675-0696-4d1e-9058-920532a96cdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.675033 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt" event={"ID":"e1c8049f-1b60-4e5c-a547-df42a78a841e","Type":"ContainerStarted","Data":"70f10bcf1e6bc8a76b4c7672ee86de0a24c0895a8819ecbd5b10b47c7af5a896"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.675063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt" event={"ID":"e1c8049f-1b60-4e5c-a547-df42a78a841e","Type":"ContainerStarted","Data":"e1853c7f3ef6879c09518598754b6ed9ad18ff7b6f4b04c58de09fb6515f90fb"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.679847 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.684926 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.685590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" event={"ID":"91e87770-5e80-48f8-b274-31b0399b9935","Type":"ContainerDied","Data":"46ce0eab57d22c314814f7950f38b6c76f66f3e79b1b2545e2350a42dd698193"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.686285 5050 generic.go:334] "Generic (PLEG): container finished" podID="91e87770-5e80-48f8-b274-31b0399b9935" containerID="46ce0eab57d22c314814f7950f38b6c76f66f3e79b1b2545e2350a42dd698193" exitCode=0 Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.688414 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" event={"ID":"91e87770-5e80-48f8-b274-31b0399b9935","Type":"ContainerStarted","Data":"30113fdebb23d57ad7894f99dbed3815e8256d36b9b4925738d539b2b9ce2bd3"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.692477 5050 generic.go:334] "Generic (PLEG): container finished" podID="e4380fc4-40ae-4321-bd83-5dce3d68fbae" containerID="ceee99bda75dca516a841f2db7f56bf846068b2eca2e6beaf8d41c6dd27ec7de" exitCode=0 Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.692567 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" event={"ID":"e4380fc4-40ae-4321-bd83-5dce3d68fbae","Type":"ContainerDied","Data":"ceee99bda75dca516a841f2db7f56bf846068b2eca2e6beaf8d41c6dd27ec7de"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.693814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-87m8f" event={"ID":"e458d0aa-1771-4429-ba32-39cc22f3d638","Type":"ContainerStarted","Data":"0133fb321284e4c972677b99039f1961ec32b1bd55eeba32ac8878672bddddb3"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.693841 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-87m8f" event={"ID":"e458d0aa-1771-4429-ba32-39cc22f3d638","Type":"ContainerStarted","Data":"53f7b6a12c3ab33f84a13a43a76e1a1790a36e53c7c2672f5af8ee8402c7830a"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.695312 5050 generic.go:334] "Generic (PLEG): container finished" podID="83e6fe13-8779-4d8b-998e-75f7b39ea426" containerID="0b85808a50d70a413f64b002506152a878cd5b7d9174080a494d1bd00586ec0d" exitCode=0 Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.695345 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" event={"ID":"83e6fe13-8779-4d8b-998e-75f7b39ea426","Type":"ContainerDied","Data":"0b85808a50d70a413f64b002506152a878cd5b7d9174080a494d1bd00586ec0d"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.697154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpn9\" (UniqueName: \"kubernetes.io/projected/65e79b29-fca8-4411-b6c8-478630090b03-kube-api-access-2qpn9\") pod \"service-ca-operator-777779d784-8mpjm\" (UID: \"65e79b29-fca8-4411-b6c8-478630090b03\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.697406 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.697539 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 05:23:39.197525342 +0000 UTC m=+144.246686928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.697647 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.698022 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.198015491 +0000 UTC m=+144.247177087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.698092 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" event={"ID":"41702b15-de0c-4d6d-8096-4a86ab88d33d","Type":"ContainerStarted","Data":"bbd5a137cf89a4e66d1f64176e918bb4a03a24015e2cf18ea0d3dfad28a0cdc8"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.698886 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" event={"ID":"2db7527f-a8bb-431d-ab1c-32c2278822aa","Type":"ContainerStarted","Data":"9f07243ea4d673a59e917fa5fe502832487613f022cc2113f1bb2d1c4ca46143"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.699857 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt" event={"ID":"cef45bcb-8e16-4f2b-95ce-0363efb53d7f","Type":"ContainerStarted","Data":"30cb2495f20e8302fda65c9470c88df16846eec1a6be3411b539924ef520ce6e"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.700869 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2bmrg" event={"ID":"066f98b0-80a0-4cdd-ada3-76a1ebab23de","Type":"ContainerStarted","Data":"928c0f53254f1bcd0eacb3acbaa70c7fd64d82ea2b87de8a8f98c576bbd9537b"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.701169 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.701246 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2bmrg" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.702821 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" event={"ID":"85a5692d-28e5-45cd-85db-ba1dcef92b58","Type":"ContainerStarted","Data":"64bc90f6655715b22af6501a5bba507011f7607eee52abbfff6560aab1c49400"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.703248 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.703870 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bmrg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.703909 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bmrg" podUID="066f98b0-80a0-4cdd-ada3-76a1ebab23de" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.704050 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" event={"ID":"d152ed50-3f92-49c8-80cc-e73e4046259e","Type":"ContainerStarted","Data":"f09019a2bf0d8455f6ff986bbc366a72f0cde16690da6330ba6d96369f3d41f7"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.704077 5050 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" event={"ID":"d152ed50-3f92-49c8-80cc-e73e4046259e","Type":"ContainerStarted","Data":"1b6879e61c747b22a0388cdbaba0599315fbec08aa873d3025d8e1f844d00098"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.704352 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.704495 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtxsm\" (UniqueName: \"kubernetes.io/projected/842445b4-a5eb-48a6-b6c7-ba426d8fab6c-kube-api-access-gtxsm\") pod \"packageserver-d55dfcdfc-ddnzh\" (UID: \"842445b4-a5eb-48a6-b6c7-ba426d8fab6c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.705081 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" event={"ID":"24256d22-e420-4442-9064-af05c357a072","Type":"ContainerStarted","Data":"f03c62dbf9091c7eda296246c7cb274886a804bfd6ccbd2a0b0e3821fa80c089"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.708155 5050 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vszrj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.708205 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" podUID="85a5692d-28e5-45cd-85db-ba1dcef92b58" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection 
refused" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.709514 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.717749 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.724666 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.731515 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" event={"ID":"784e6114-262f-4937-831c-d16945f48683","Type":"ContainerStarted","Data":"e6cab2ab69f97c3a264bc6748f3ff022c58a87c0e1c244b641edd8fd096e753d"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.731555 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" event={"ID":"784e6114-262f-4937-831c-d16945f48683","Type":"ContainerStarted","Data":"b32b0b5b9088b922db34bd6267ad0eeb19600e6531620484b3bffd22800c0b6f"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.731563 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" event={"ID":"784e6114-262f-4937-831c-d16945f48683","Type":"ContainerStarted","Data":"00ad94dbbddea5c95af94fc9ff198cab1bf2b1f7d574d7435de3f77848dc09e7"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.734493 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.743280 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.746832 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.751568 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-blplz" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.758371 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.759183 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fk4vq" event={"ID":"dab2d02c-8e81-40c5-a5ca-98be1833702e","Type":"ContainerStarted","Data":"ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.762324 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.766683 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.768313 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" event={"ID":"f221629d-987d-49fe-bcaf-2708f516eec8","Type":"ContainerStarted","Data":"620fadb6f1b6928c218085b38b55a357a568d302316cfcdb44cb55867adab02e"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.770433 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" event={"ID":"1e46f669-9cbc-482c-a124-17a007b3f203","Type":"ContainerStarted","Data":"d19a6d2e6c119a55718690c29db4de10e3409d2c71129ed19eb8132da20ba196"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.778835 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xkv6l" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.780115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" event={"ID":"35ac638f-6298-404b-9503-ac7f3aa58e4d","Type":"ContainerStarted","Data":"5da1b3e5666c870fef6d347e3b77dfb1548bcea967bc2513dc6be67ee4d79c24"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.786935 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4gkcm" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.788667 5050 patch_prober.go:28] interesting pod/console-operator-58897d9998-tkrtm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.788699 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tkrtm" podUID="59157317-ce37-4d74-b7b5-6495704e3571" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.789066 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54" event={"ID":"312df477-54e5-4ebc-bde0-ec291393ece9","Type":"ContainerStarted","Data":"db66d6b554cd705cfa201a164820bd85d7882b4dd14c580e862294c6e043ffe1"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.789091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54" event={"ID":"312df477-54e5-4ebc-bde0-ec291393ece9","Type":"ContainerStarted","Data":"574ef710893edbda081dd4cd4fd4f58cc07c112c0df0b8cedb92e3c682561b94"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.789101 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54" event={"ID":"312df477-54e5-4ebc-bde0-ec291393ece9","Type":"ContainerStarted","Data":"c38354211477c73c1e2df5a255c75e8e75f02f9fed958c13d9c65ced8a8b11ee"} Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.799252 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.802744 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.302726603 +0000 UTC m=+144.351888199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.815910 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pqgfr"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.831275 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wkxcn"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.887116 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx"] Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.904142 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:38 crc kubenswrapper[5050]: E0131 05:23:38.905003 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.404989954 +0000 UTC m=+144.454151550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:38 crc kubenswrapper[5050]: W0131 05:23:38.960366 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a22853a_72dd_48ac_aca9_1761185740ba.slice/crio-e7b40c9ecc7d66e22227fbe6c7ce2b6dbe676df96138398489f3d4449f9cabcc WatchSource:0}: Error finding container e7b40c9ecc7d66e22227fbe6c7ce2b6dbe676df96138398489f3d4449f9cabcc: Status 404 returned error can't find the container with id e7b40c9ecc7d66e22227fbe6c7ce2b6dbe676df96138398489f3d4449f9cabcc Jan 31 05:23:38 crc kubenswrapper[5050]: I0131 05:23:38.992654 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.004844 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.005841 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.006035 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.506008537 +0000 UTC m=+144.555170133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.006186 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.006493 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.506481445 +0000 UTC m=+144.555643041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.007020 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.007042 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.021179 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.021570 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.107225 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.107655 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.60758214 +0000 UTC m=+144.656743746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.138770 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9jhn"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.146300 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.209247 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:39 
crc kubenswrapper[5050]: E0131 05:23:39.209643 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.709627152 +0000 UTC m=+144.758788748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.231354 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.249848 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.266094 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bgqwp" podStartSLOduration=124.266079709 podStartE2EDuration="2m4.266079709s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:39.264060213 +0000 UTC m=+144.313221819" watchObservedRunningTime="2026-01-31 05:23:39.266079709 +0000 UTC m=+144.315241305" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.280395 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww"] Jan 31 
05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.310149 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.310460 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.810446637 +0000 UTC m=+144.859608223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: W0131 05:23:39.312538 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8c36ad8_2c55_41d9_8bcc_8accc3501626.slice/crio-f6b330752c43e715b80cef783450ac191c381094c08068a01ca8875b3b943a5b WatchSource:0}: Error finding container f6b330752c43e715b80cef783450ac191c381094c08068a01ca8875b3b943a5b: Status 404 returned error can't find the container with id f6b330752c43e715b80cef783450ac191c381094c08068a01ca8875b3b943a5b Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.316794 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt"] Jan 31 05:23:39 crc 
kubenswrapper[5050]: I0131 05:23:39.360609 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm"] Jan 31 05:23:39 crc kubenswrapper[5050]: W0131 05:23:39.406871 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65e79b29_fca8_4411_b6c8_478630090b03.slice/crio-29854b158fc30ffacefd6ef81f923a4f16fcdf52ef846fb44157931b008b88d4 WatchSource:0}: Error finding container 29854b158fc30ffacefd6ef81f923a4f16fcdf52ef846fb44157931b008b88d4: Status 404 returned error can't find the container with id 29854b158fc30ffacefd6ef81f923a4f16fcdf52ef846fb44157931b008b88d4 Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.412712 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.413045 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:39.91303495 +0000 UTC m=+144.962196536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.514336 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.514572 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.014548272 +0000 UTC m=+145.063709868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.514720 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.515058 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.01504708 +0000 UTC m=+145.064208666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.617859 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.618224 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.118198604 +0000 UTC m=+145.167360200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.618476 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.618776 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.118770356 +0000 UTC m=+145.167931952 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.722824 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.723394 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-blplz"] Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.723602 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.223582103 +0000 UTC m=+145.272743699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.745408 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tkrtm" podStartSLOduration=123.745392683 podStartE2EDuration="2m3.745392683s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:39.745233227 +0000 UTC m=+144.794394823" watchObservedRunningTime="2026-01-31 05:23:39.745392683 +0000 UTC m=+144.794554279" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.764132 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hfmnk"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.804488 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.826600 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:39 crc kubenswrapper[5050]: E0131 05:23:39.826887 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.326875213 +0000 UTC m=+145.376036809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.833964 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xkv6l"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.858702 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" event={"ID":"928c5e09-96f9-4f04-b797-e23c1efa1bcf","Type":"ContainerStarted","Data":"a44a96c2abefd7a1731d51ae40d42d4b982f3fedaabe695251a32ae27ac59f2c"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.859502 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" event={"ID":"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02","Type":"ContainerStarted","Data":"efd8bea534239e70bef10e6b440a176eb03a69351ecf578027726d00f35aa324"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.869231 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" event={"ID":"5915d8a1-8561-481b-990d-60cd35f30d7c","Type":"ContainerStarted","Data":"f25268100efa96394f2d9e9a22ec15e5e660ca841a6d449e582bfc19ed1921e8"} Jan 31 05:23:39 crc kubenswrapper[5050]: W0131 05:23:39.873777 5050 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9607a267_53c0_4432_b3aa_dd7d0e04ba77.slice/crio-52e6b0378e176b2a5f687ad093eb347c8c2c55d0812c08915136469378a2b06c WatchSource:0}: Error finding container 52e6b0378e176b2a5f687ad093eb347c8c2c55d0812c08915136469378a2b06c: Status 404 returned error can't find the container with id 52e6b0378e176b2a5f687ad093eb347c8c2c55d0812c08915136469378a2b06c Jan 31 05:23:39 crc kubenswrapper[5050]: W0131 05:23:39.879041 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b4f1dd9_6c94_4551_a88f_2b83c154962a.slice/crio-dbbd633bec1de7f369fed42ff129bf15a0950f48e847952d01eee4abe1a27cd5 WatchSource:0}: Error finding container dbbd633bec1de7f369fed42ff129bf15a0950f48e847952d01eee4abe1a27cd5: Status 404 returned error can't find the container with id dbbd633bec1de7f369fed42ff129bf15a0950f48e847952d01eee4abe1a27cd5 Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.880570 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" event={"ID":"f221629d-987d-49fe-bcaf-2708f516eec8","Type":"ContainerStarted","Data":"fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.880614 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.889098 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" event={"ID":"42e534a6-009e-460c-9664-483a1f93ce63","Type":"ContainerStarted","Data":"fda1bb84bc10f2657b0fef9486c09c7dc8e3e8231e056702ed39466a77841a8a"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.897659 5050 patch_prober.go:28] interesting 
pod/oauth-openshift-558db77b4-ln492 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.897871 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" podUID="f221629d-987d-49fe-bcaf-2708f516eec8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.897825 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" event={"ID":"a8c36ad8-2c55-41d9-8bcc-8accc3501626","Type":"ContainerStarted","Data":"f6b330752c43e715b80cef783450ac191c381094c08068a01ca8875b3b943a5b"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.897731 5050 csr.go:261] certificate signing request csr-7qhd6 is approved, waiting to be issued Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.909356 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" event={"ID":"8a22853a-72dd-48ac-aca9-1761185740ba","Type":"ContainerStarted","Data":"e7b40c9ecc7d66e22227fbe6c7ce2b6dbe676df96138398489f3d4449f9cabcc"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.909597 5050 csr.go:257] certificate signing request csr-7qhd6 is issued Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.927704 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:39 crc 
kubenswrapper[5050]: E0131 05:23:39.928043 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.428028 +0000 UTC m=+145.477189596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.930203 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" event={"ID":"e4380fc4-40ae-4321-bd83-5dce3d68fbae","Type":"ContainerStarted","Data":"87e5cf9a0df27f1c8ae8c71f797005c30792b55ecf6d6f2cd8f7cadc28117666"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.930472 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.932440 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" event={"ID":"a9c06728-b146-4ce3-b975-81e0431c9b38","Type":"ContainerStarted","Data":"1beabb4446c9dcfc78d2ae155c6af0713b35f4abd8baec0737a5694ccb79b059"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.933382 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" 
event={"ID":"d2736e97-3103-42dc-9d1d-a3bf1b4971ec","Type":"ContainerStarted","Data":"ff5d69a5d007d1cd510a35b69d0dd0b617dcd0ca31aec099faff6e5a8b35bede"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.934159 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" event={"ID":"43fee678-54c4-48f9-a194-720209531460","Type":"ContainerStarted","Data":"1fc4d85045a48c8648f628ff00ced4da513e1ef32ee48d1b97d9fa7c4ba134c2"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.936660 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" event={"ID":"35ac638f-6298-404b-9503-ac7f3aa58e4d","Type":"ContainerStarted","Data":"28e1befc586671cfbe92755d30266036f0586a94b183eb9c2565a534ac9cc4c2"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.948174 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" event={"ID":"a9291437-d019-4d82-99b5-6b7322ea1750","Type":"ContainerStarted","Data":"147dac63b4f0de5f44ca150bd59b2390992bb60a77f7b32e69fa4f9204d620e0"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.970588 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p776r"] Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.971028 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" event={"ID":"1e46f669-9cbc-482c-a124-17a007b3f203","Type":"ContainerStarted","Data":"3e578823265f8bfdff9bca2fab30f41ac9447a9dab00d16f425c31f97aa8c775"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.976746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" 
event={"ID":"199faf50-26fd-485c-871d-c4a2d9cc33e6","Type":"ContainerStarted","Data":"86088ff171fce7a39d480f0118e0e9b56fff5516fe0a83c24db97f165fc79692"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.977821 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" event={"ID":"65e79b29-fca8-4411-b6c8-478630090b03","Type":"ContainerStarted","Data":"29854b158fc30ffacefd6ef81f923a4f16fcdf52ef846fb44157931b008b88d4"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.989589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" event={"ID":"c51760c2-79c8-4d25-99a6-bfb51d768be8","Type":"ContainerStarted","Data":"4751da7768e75ce2bb2c631d2c3d9e90a5f80348ca43517854a987d86df9a870"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.996051 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" event={"ID":"2db7527f-a8bb-431d-ab1c-32c2278822aa","Type":"ContainerStarted","Data":"aec4ea7cf7e5e6ccdab5600b59fe471b6d5c70bd2f5737d76307b3f881ae5b0c"} Jan 31 05:23:39 crc kubenswrapper[5050]: I0131 05:23:39.998596 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4gkcm" event={"ID":"576a09ad-58f8-4eb7-9d08-0e7183a4996b","Type":"ContainerStarted","Data":"576c4f5ca26ef17f68730194cb188bca6887426cbe181be95e2aab66db3cc048"} Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.006555 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.006591 5050 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.006683 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" event={"ID":"24256d22-e420-4442-9064-af05c357a072","Type":"ContainerStarted","Data":"48a01d84d734ea8157fcb8830b47a29c2fb8e5728fabb629a11b1b8bf9d4d96a"} Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.027600 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" event={"ID":"ef79cfe6-00eb-40e7-941f-4013514c4fd2","Type":"ContainerStarted","Data":"59ced06c46ee62d9988148fb62ddddf5aff6297c2722047dda151be0ee7b0007"} Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.028788 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.029046 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.529035732 +0000 UTC m=+145.578197328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.031368 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" event={"ID":"d6402576-676e-4b71-9634-6614fd9a177f","Type":"ContainerStarted","Data":"2fa645872f01b804a66d4e6847817bffed136b4f6e9b9683dd220815106696d0"} Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.032676 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bmrg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.032699 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bmrg" podUID="066f98b0-80a0-4cdd-ada3-76a1ebab23de" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.033274 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.052085 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 
05:23:40.052194 5050 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ck76z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.052222 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" podUID="d152ed50-3f92-49c8-80cc-e73e4046259e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.129678 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.130906 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.630891097 +0000 UTC m=+145.680052693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.140868 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.160932 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tkrtm" Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.183439 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.683411936 +0000 UTC m=+145.732573532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.232939 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" podStartSLOduration=124.232919508 podStartE2EDuration="2m4.232919508s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.219540329 +0000 UTC m=+145.268701925" watchObservedRunningTime="2026-01-31 05:23:40.232919508 +0000 UTC m=+145.282081104" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.242631 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.243034 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.743019302 +0000 UTC m=+145.792180898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.330925 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-77qqm" podStartSLOduration=124.330908646 podStartE2EDuration="2m4.330908646s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.269825152 +0000 UTC m=+145.318986748" watchObservedRunningTime="2026-01-31 05:23:40.330908646 +0000 UTC m=+145.380070242" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.344369 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.344661 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.844651119 +0000 UTC m=+145.893812715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.374018 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc8t7" podStartSLOduration=125.374004326 podStartE2EDuration="2m5.374004326s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.372413815 +0000 UTC m=+145.421575411" watchObservedRunningTime="2026-01-31 05:23:40.374004326 +0000 UTC m=+145.423165922" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.374483 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fk4vq" podStartSLOduration=124.374478163 podStartE2EDuration="2m4.374478163s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.332572279 +0000 UTC m=+145.381733875" watchObservedRunningTime="2026-01-31 05:23:40.374478163 +0000 UTC m=+145.423639759" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.411529 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2bmrg" podStartSLOduration=124.411516512 podStartE2EDuration="2m4.411516512s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.409678142 +0000 UTC m=+145.458839748" watchObservedRunningTime="2026-01-31 05:23:40.411516512 +0000 UTC m=+145.460678108" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.446805 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" podStartSLOduration=124.446791204 podStartE2EDuration="2m4.446791204s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.446025795 +0000 UTC m=+145.495187401" watchObservedRunningTime="2026-01-31 05:23:40.446791204 +0000 UTC m=+145.495952800" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.448054 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.448436 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:40.948424576 +0000 UTC m=+145.997586172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.464246 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x746s" podStartSLOduration=124.464211707 podStartE2EDuration="2m4.464211707s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.462670028 +0000 UTC m=+145.511831624" watchObservedRunningTime="2026-01-31 05:23:40.464211707 +0000 UTC m=+145.513373303" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.549405 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.549866 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.049855844 +0000 UTC m=+146.099017440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.559342 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2rw5" podStartSLOduration=124.559322415 podStartE2EDuration="2m4.559322415s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.512214662 +0000 UTC m=+145.561376258" watchObservedRunningTime="2026-01-31 05:23:40.559322415 +0000 UTC m=+145.608484011" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.600119 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-87m8f" podStartSLOduration=124.600103216 podStartE2EDuration="2m4.600103216s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.561909163 +0000 UTC m=+145.611070759" watchObservedRunningTime="2026-01-31 05:23:40.600103216 +0000 UTC m=+145.649264812" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.649888 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whzj8" podStartSLOduration=124.649870949 podStartE2EDuration="2m4.649870949s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.61126168 +0000 UTC m=+145.660423276" watchObservedRunningTime="2026-01-31 05:23:40.649870949 +0000 UTC m=+145.699032545" Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.650200 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.150185941 +0000 UTC m=+146.199347537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.650125 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.650561 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:40 crc 
kubenswrapper[5050]: E0131 05:23:40.650843 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.150836185 +0000 UTC m=+146.199997781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.680135 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-h7wkt" podStartSLOduration=124.680117999 podStartE2EDuration="2m4.680117999s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.649536576 +0000 UTC m=+145.698698172" watchObservedRunningTime="2026-01-31 05:23:40.680117999 +0000 UTC m=+145.729279595" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.704340 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-85mj8" podStartSLOduration=124.704326371 podStartE2EDuration="2m4.704326371s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.703182877 +0000 UTC m=+145.752344473" watchObservedRunningTime="2026-01-31 05:23:40.704326371 +0000 UTC m=+145.753487967" Jan 31 05:23:40 crc 
kubenswrapper[5050]: I0131 05:23:40.749081 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" podStartSLOduration=125.749066633 podStartE2EDuration="2m5.749066633s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.748898436 +0000 UTC m=+145.798060032" watchObservedRunningTime="2026-01-31 05:23:40.749066633 +0000 UTC m=+145.798228229" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.751270 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.751534 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.251520936 +0000 UTC m=+146.300682532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.848601 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-65vlt" podStartSLOduration=125.848585269 podStartE2EDuration="2m5.848585269s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.80052542 +0000 UTC m=+145.849687016" watchObservedRunningTime="2026-01-31 05:23:40.848585269 +0000 UTC m=+145.897746865" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.856318 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.856574 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.356564951 +0000 UTC m=+146.405726547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.893950 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" podStartSLOduration=124.893932903 podStartE2EDuration="2m4.893932903s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.893168164 +0000 UTC m=+145.942329760" watchObservedRunningTime="2026-01-31 05:23:40.893932903 +0000 UTC m=+145.943094499" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.911621 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 05:18:39 +0000 UTC, rotation deadline is 2026-11-24 04:00:46.664428135 +0000 UTC Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.911654 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7126h37m5.752776427s for next certificate rotation Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.916154 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7l54" podStartSLOduration=125.916141068 podStartE2EDuration="2m5.916141068s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:40.914706274 
+0000 UTC m=+145.963867880" watchObservedRunningTime="2026-01-31 05:23:40.916141068 +0000 UTC m=+145.965302664" Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.957178 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.957328 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.457304204 +0000 UTC m=+146.506465800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:40 crc kubenswrapper[5050]: I0131 05:23:40.957480 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:40 crc kubenswrapper[5050]: E0131 05:23:40.957768 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.457757572 +0000 UTC m=+146.506919168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.009185 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:41 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:41 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:41 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.009248 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.060102 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.060323 5050 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.560307622 +0000 UTC m=+146.609469218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.060666 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.061083 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.561075351 +0000 UTC m=+146.610236947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.062061 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4gkcm" event={"ID":"576a09ad-58f8-4eb7-9d08-0e7183a4996b","Type":"ContainerStarted","Data":"57401eefde4358fdd74faf57fd810e179e84bde2beadbfbaa9f11aa86f1f3aa4"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.070611 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" event={"ID":"ef79cfe6-00eb-40e7-941f-4013514c4fd2","Type":"ContainerStarted","Data":"1e62a208c56ec8634a249922cfcfea45754c2cd1aaaa8afde0969ee3b7b40001"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.088786 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4gkcm" podStartSLOduration=6.088770725 podStartE2EDuration="6.088770725s" podCreationTimestamp="2026-01-31 05:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.086768399 +0000 UTC m=+146.135929995" watchObservedRunningTime="2026-01-31 05:23:41.088770725 +0000 UTC m=+146.137932311" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.103603 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" 
event={"ID":"199faf50-26fd-485c-871d-c4a2d9cc33e6","Type":"ContainerStarted","Data":"a4d6f1c2cd2dcb554be4ec3f1a9dab7afc0084fb97ed560b2222d54fa94acdb7"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.104452 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.107364 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kn2nd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.107409 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" podUID="199faf50-26fd-485c-871d-c4a2d9cc33e6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.108423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-blplz" event={"ID":"7b4f1dd9-6c94-4551-a88f-2b83c154962a","Type":"ContainerStarted","Data":"68c0ab8f438dd1343266fdef19d66dcb48718f40c2ddb7dd287fdd616694ff56"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.108451 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-blplz" event={"ID":"7b4f1dd9-6c94-4551-a88f-2b83c154962a","Type":"ContainerStarted","Data":"dbbd633bec1de7f369fed42ff129bf15a0950f48e847952d01eee4abe1a27cd5"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.119481 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" 
event={"ID":"65e79b29-fca8-4411-b6c8-478630090b03","Type":"ContainerStarted","Data":"b2052fa86bdc24eda16e2201f6f4b8639e8fa4aea35d927eb0f49cddb1350012"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.124764 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" podStartSLOduration=125.124749744 podStartE2EDuration="2m5.124749744s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.123992394 +0000 UTC m=+146.173153980" watchObservedRunningTime="2026-01-31 05:23:41.124749744 +0000 UTC m=+146.173911330" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.135390 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" event={"ID":"43fee678-54c4-48f9-a194-720209531460","Type":"ContainerStarted","Data":"4c77ae44b56487e520b8cb27946e38dbe1e62a3b8de8162b506a84665af748f9"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.136851 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.138643 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" event={"ID":"c51760c2-79c8-4d25-99a6-bfb51d768be8","Type":"ContainerStarted","Data":"1005bb4f26d56df980a49266d71ad7b48eb402cf5a6520edcde223672466414d"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.158173 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" 
event={"ID":"d2736e97-3103-42dc-9d1d-a3bf1b4971ec","Type":"ContainerStarted","Data":"0b9ab29d4cd641898a3e8d01a178fd35a406f6cfa0486192b72f3adfcd026bc5"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.164376 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.164555 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.664538337 +0000 UTC m=+146.713699933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.164615 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.165568 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.665561346 +0000 UTC m=+146.714722942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.183272 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mpjm" podStartSLOduration=125.18325722 podStartE2EDuration="2m5.18325722s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.15436032 +0000 UTC m=+146.203521916" watchObservedRunningTime="2026-01-31 05:23:41.18325722 +0000 UTC m=+146.232418816" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.183595 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-blplz" podStartSLOduration=125.183591792 podStartE2EDuration="2m5.183591792s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.174179894 +0000 UTC m=+146.223341480" watchObservedRunningTime="2026-01-31 05:23:41.183591792 +0000 UTC m=+146.232753388" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.185003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" event={"ID":"842445b4-a5eb-48a6-b6c7-ba426d8fab6c","Type":"ContainerStarted","Data":"47eb6174e5c01f4e9d40e2f6a63df7fe9f82d2fd3222e1f07ade31fc4b5d7027"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.185042 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" event={"ID":"842445b4-a5eb-48a6-b6c7-ba426d8fab6c","Type":"ContainerStarted","Data":"d083de35dac71dce53d26d1d0d5646b78dd172b9eb15d73da9ed36c296c1e140"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.185908 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.193315 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" event={"ID":"a8c36ad8-2c55-41d9-8bcc-8accc3501626","Type":"ContainerStarted","Data":"c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.194323 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.195064 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ddnzh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.195097 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" podUID="842445b4-a5eb-48a6-b6c7-ba426d8fab6c" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.196192 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gl9vw" podStartSLOduration=125.196184521 podStartE2EDuration="2m5.196184521s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.195589398 +0000 UTC m=+146.244750994" watchObservedRunningTime="2026-01-31 05:23:41.196184521 +0000 UTC m=+146.245346117" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.200755 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g9jhn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.200802 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.216793 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" event={"ID":"a9291437-d019-4d82-99b5-6b7322ea1750","Type":"ContainerStarted","Data":"964b0ccf437a037713a9b492190fd161ed69bcd05717f90f3c185275aa8751d4"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.217636 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" 
Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.233643 5050 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-fhsww container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.233706 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" podUID="a9291437-d019-4d82-99b5-6b7322ea1750" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.236952 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hfmnk" event={"ID":"9607a267-53c0-4432-b3aa-dd7d0e04ba77","Type":"ContainerStarted","Data":"52e6b0378e176b2a5f687ad093eb347c8c2c55d0812c08915136469378a2b06c"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.250342 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" event={"ID":"42e534a6-009e-460c-9664-483a1f93ce63","Type":"ContainerStarted","Data":"81038ee7ee4ec692ed1b18d6730fb3d86858cc6ea5fd5d1f1f5e92a732f11f5d"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.250372 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" event={"ID":"42e534a6-009e-460c-9664-483a1f93ce63","Type":"ContainerStarted","Data":"bba116782694f5e4d706d38276a115d4ff70670d037382dfc50a1bdb85d1fdf5"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.259199 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" podStartSLOduration=125.259187348 
podStartE2EDuration="2m5.259187348s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.221080468 +0000 UTC m=+146.270242054" watchObservedRunningTime="2026-01-31 05:23:41.259187348 +0000 UTC m=+146.308348944" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.259438 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" podStartSLOduration=125.259435057 podStartE2EDuration="2m5.259435057s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.257719922 +0000 UTC m=+146.306881518" watchObservedRunningTime="2026-01-31 05:23:41.259435057 +0000 UTC m=+146.308596653" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.270529 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.271067 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.771051068 +0000 UTC m=+146.820212654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.273235 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" event={"ID":"a9c06728-b146-4ce3-b975-81e0431c9b38","Type":"ContainerStarted","Data":"98b3f8cf7840983b9272c6f26d64580231654add9d826355cdb9de35a631c029"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.286185 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" podStartSLOduration=125.286167814 podStartE2EDuration="2m5.286167814s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.285644844 +0000 UTC m=+146.334806440" watchObservedRunningTime="2026-01-31 05:23:41.286167814 +0000 UTC m=+146.335329410" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.297130 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" event={"ID":"57e06675-0696-4d1e-9058-920532a96cdf","Type":"ContainerStarted","Data":"80574b9373f8eed10b826d82ed8bb796fcce5d8d1544de8f15bbb2c2f0a09992"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.297173 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" 
event={"ID":"57e06675-0696-4d1e-9058-920532a96cdf","Type":"ContainerStarted","Data":"d8a9702f949b0a2631a4ae0cd8fb3332260e6623ed08c2a75314cbb5cd01133f"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.300680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" event={"ID":"5915d8a1-8561-481b-990d-60cd35f30d7c","Type":"ContainerStarted","Data":"02ce8716faf717215c1eeb2a1c91391df3342c073322f56540b995807b7c763d"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.301453 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" podStartSLOduration=125.301440345 podStartE2EDuration="2m5.301440345s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.300211478 +0000 UTC m=+146.349373064" watchObservedRunningTime="2026-01-31 05:23:41.301440345 +0000 UTC m=+146.350601941" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.302180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" event={"ID":"83e6fe13-8779-4d8b-998e-75f7b39ea426","Type":"ContainerStarted","Data":"b12e7da53e654fc47b9392cfab65b943505c4460359ea6d320cbcd5869dc5409"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.303692 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xkv6l" event={"ID":"e4ab724a-e633-4639-938c-317c550ba114","Type":"ContainerStarted","Data":"85b7f55cdf2f4bfde2e26d37d85af148eaa71f00818fcb0f2ae31e62706b541e"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.303715 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xkv6l" 
event={"ID":"e4ab724a-e633-4639-938c-317c550ba114","Type":"ContainerStarted","Data":"140cf964e8aca8200525ba3849f0814454f9a88ca064a30da0e8e25c2e63b3bd"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.325434 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zkmjw" podStartSLOduration=125.325418737 podStartE2EDuration="2m5.325418737s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.32524293 +0000 UTC m=+146.374404526" watchObservedRunningTime="2026-01-31 05:23:41.325418737 +0000 UTC m=+146.374580333" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.328828 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" event={"ID":"91e87770-5e80-48f8-b274-31b0399b9935","Type":"ContainerStarted","Data":"3840cf76d6e8f8733e9b8279c0ca05c2e86b8429312f3c428dea65961dfe23e3"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.343538 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cqnnx" podStartSLOduration=125.343522675 podStartE2EDuration="2m5.343522675s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.342281688 +0000 UTC m=+146.391443284" watchObservedRunningTime="2026-01-31 05:23:41.343522675 +0000 UTC m=+146.392684261" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.371428 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" 
event={"ID":"ffe3ba47-1c85-4aa1-b9a8-3c9cd14c2f02","Type":"ContainerStarted","Data":"06e4dd57e77e6f06752cf80c78ef750995499b15060cf20b795b72c188226cc2"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.372215 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.374079 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.874068108 +0000 UTC m=+146.923229704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.375063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" event={"ID":"928c5e09-96f9-4f04-b797-e23c1efa1bcf","Type":"ContainerStarted","Data":"c5297ffc782e0a0004612a868e2ea81b2d73758be4d632bbcf331df36214211f"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.380724 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" 
event={"ID":"d6402576-676e-4b71-9634-6614fd9a177f","Type":"ContainerStarted","Data":"97fe192f4955be659d5f2db50f36d1992804811f348163a9ef24b7375f72636d"} Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.398773 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.426619 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xkv6l" podStartSLOduration=6.426605046 podStartE2EDuration="6.426605046s" podCreationTimestamp="2026-01-31 05:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.386084055 +0000 UTC m=+146.435245651" watchObservedRunningTime="2026-01-31 05:23:41.426605046 +0000 UTC m=+146.475766642" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.428519 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" podStartSLOduration=125.428512409 podStartE2EDuration="2m5.428512409s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.42590585 +0000 UTC m=+146.475067446" watchObservedRunningTime="2026-01-31 05:23:41.428512409 +0000 UTC m=+146.477674005" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.473533 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.475151 5050 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:41.975137462 +0000 UTC m=+147.024299058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.527541 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" podStartSLOduration=126.527525006 podStartE2EDuration="2m6.527525006s" podCreationTimestamp="2026-01-31 05:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.484823291 +0000 UTC m=+146.533984887" watchObservedRunningTime="2026-01-31 05:23:41.527525006 +0000 UTC m=+146.576686602" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.528551 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" podStartSLOduration=125.528547634 podStartE2EDuration="2m5.528547634s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.52661252 +0000 UTC m=+146.575774117" watchObservedRunningTime="2026-01-31 05:23:41.528547634 +0000 UTC m=+146.577709230" Jan 31 05:23:41 crc kubenswrapper[5050]: 
I0131 05:23:41.579537 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d5cf9" podStartSLOduration=125.579514953 podStartE2EDuration="2m5.579514953s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.574582125 +0000 UTC m=+146.623743721" watchObservedRunningTime="2026-01-31 05:23:41.579514953 +0000 UTC m=+146.628676549" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.580002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.580379 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.080367085 +0000 UTC m=+147.129528681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.680670 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.680930 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.18091653 +0000 UTC m=+147.230078126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.783721 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.784351 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.284340944 +0000 UTC m=+147.333502540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.880840 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.885393 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.885640 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.385612996 +0000 UTC m=+147.434774592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.918426 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kmd7l" podStartSLOduration=125.918411925 podStartE2EDuration="2m5.918411925s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:41.648708025 +0000 UTC m=+146.697869611" watchObservedRunningTime="2026-01-31 05:23:41.918411925 +0000 UTC m=+146.967573521" Jan 31 05:23:41 crc kubenswrapper[5050]: I0131 05:23:41.987075 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:41 crc kubenswrapper[5050]: E0131 05:23:41.987470 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.487454651 +0000 UTC m=+147.536616247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.009480 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:42 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:42 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:42 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.009534 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.061805 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.061928 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.087682 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.087852 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.587828159 +0000 UTC m=+147.636989755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.088038 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.088300 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.588288896 +0000 UTC m=+147.637450492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.189609 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.189740 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.689716064 +0000 UTC m=+147.738877660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.189873 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.190189 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.690176232 +0000 UTC m=+147.739337828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.264083 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.291266 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.291457 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.791423314 +0000 UTC m=+147.840584910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.291570 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.291841 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.791825859 +0000 UTC m=+147.840987455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.337568 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.337609 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.394490 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.394880 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.894866689 +0000 UTC m=+147.944028285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.404530 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" event={"ID":"8a22853a-72dd-48ac-aca9-1761185740ba","Type":"ContainerStarted","Data":"341b3924c00d9b9fe0f84a42d201bd748456fe250b47f29709c7f3f01f0ecc34"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.404572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" event={"ID":"8a22853a-72dd-48ac-aca9-1761185740ba","Type":"ContainerStarted","Data":"6c03b06bc588be5d4af31a092189d9ce3fe5d25ff4a273f6a4f18262f9487b33"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.406116 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" event={"ID":"43fee678-54c4-48f9-a194-720209531460","Type":"ContainerStarted","Data":"3e034328edaf624ce29b73ed6f1309a61e78a0ff433799598e4063819b0a9f4b"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.408910 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" event={"ID":"928c5e09-96f9-4f04-b797-e23c1efa1bcf","Type":"ContainerStarted","Data":"e3c5d06d169f70815a06096fc2e13237011e884bf6d381dd3a32f7351ab5f2ea"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.411782 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hfmnk" 
event={"ID":"9607a267-53c0-4432-b3aa-dd7d0e04ba77","Type":"ContainerStarted","Data":"f33100ba72f802cfd4fad3282b61947250b6e6f9a7b4517eb11bc53813b53ccb"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.411805 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hfmnk" event={"ID":"9607a267-53c0-4432-b3aa-dd7d0e04ba77","Type":"ContainerStarted","Data":"a829feba333b51d22e5cd1fe508a1667ad323220e63e20acd641ab2c46087f00"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.412134 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.414473 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" event={"ID":"c51760c2-79c8-4d25-99a6-bfb51d768be8","Type":"ContainerStarted","Data":"0e92edd1aaae052aea31d1b8749a0fa422c34b44ddc9a31edc0a6abd2d0f0f05"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.417338 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" event={"ID":"57e06675-0696-4d1e-9058-920532a96cdf","Type":"ContainerStarted","Data":"57c0fe1b5f5a4df16480b47f8d1b5902f9335b6eb153d60d78056ca3be3bfc4e"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.425926 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" event={"ID":"ef79cfe6-00eb-40e7-941f-4013514c4fd2","Type":"ContainerStarted","Data":"56970b16a193d6a1fef00794ac016ca94ada1758116da1aff2fd03800a36790d"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.431631 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wkxcn" podStartSLOduration=126.431616736 podStartE2EDuration="2m6.431616736s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:42.429693594 +0000 UTC m=+147.478855190" watchObservedRunningTime="2026-01-31 05:23:42.431616736 +0000 UTC m=+147.480778322" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.441070 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" event={"ID":"91e87770-5e80-48f8-b274-31b0399b9935","Type":"ContainerStarted","Data":"343d9b83f4ceca388ff7ffdbc7b9ca59d5e024537d29f3409c58db73cd74a029"} Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.442718 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g9jhn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.442755 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.466585 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hfmnk" podStartSLOduration=7.466567646 podStartE2EDuration="7.466567646s" podCreationTimestamp="2026-01-31 05:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:42.461563686 +0000 UTC m=+147.510725282" watchObservedRunningTime="2026-01-31 05:23:42.466567646 +0000 UTC m=+147.515729242" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.467077 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lm2gr" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.489316 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fhsww" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.489455 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kn2nd" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.496433 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.498575 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:42.998563513 +0000 UTC m=+148.047725109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.511798 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gcjhn" podStartSLOduration=126.511782236 podStartE2EDuration="2m6.511782236s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:42.508779442 +0000 UTC m=+147.557941038" watchObservedRunningTime="2026-01-31 05:23:42.511782236 +0000 UTC m=+147.560943842" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.553762 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p776r" podStartSLOduration=126.553749352 podStartE2EDuration="2m6.553749352s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:42.551708595 +0000 UTC m=+147.600870191" watchObservedRunningTime="2026-01-31 05:23:42.553749352 +0000 UTC m=+147.602910948" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.597745 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.599386 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.099371588 +0000 UTC m=+148.148533184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.628841 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mwtvl" podStartSLOduration=126.628827228 podStartE2EDuration="2m6.628827228s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:42.582969884 +0000 UTC m=+147.632131480" watchObservedRunningTime="2026-01-31 05:23:42.628827228 +0000 UTC m=+147.677988824" Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.704345 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:42 
crc kubenswrapper[5050]: E0131 05:23:42.704628 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.204616531 +0000 UTC m=+148.253778127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.805368 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.806035 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.306020699 +0000 UTC m=+148.355182295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:42 crc kubenswrapper[5050]: I0131 05:23:42.907268 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:42 crc kubenswrapper[5050]: E0131 05:23:42.907693 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.407676446 +0000 UTC m=+148.456838042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.010070 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:43 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:43 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:43 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.010116 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.010786 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.010875 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 05:23:43.510864131 +0000 UTC m=+148.560025717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.011064 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.011306 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.511298558 +0000 UTC m=+148.560460154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.112484 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.112806 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.612792789 +0000 UTC m=+148.661954385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.150240 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddnzh" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.213459 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.213723 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.713712878 +0000 UTC m=+148.762874464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.314138 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.314466 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.81445226 +0000 UTC m=+148.863613856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.324606 5050 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.356488 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lkjld" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.415922 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.416903 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:43.916876206 +0000 UTC m=+148.966037812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.460486 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" event={"ID":"8a22853a-72dd-48ac-aca9-1761185740ba","Type":"ContainerStarted","Data":"38c882930ccea6e1233ac8927a3abc3a406f46de2a28b8cb98e6b471c7b0cafd"} Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.460998 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" event={"ID":"8a22853a-72dd-48ac-aca9-1761185740ba","Type":"ContainerStarted","Data":"b7621eb6ef522faefb0975abac42908e285ed0020de846f5833f18ab3f349efe"} Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.461722 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g9jhn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.461824 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.495124 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="hostpath-provisioner/csi-hostpathplugin-pqgfr" podStartSLOduration=8.495110192 podStartE2EDuration="8.495110192s" podCreationTimestamp="2026-01-31 05:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:43.492902158 +0000 UTC m=+148.542063754" watchObservedRunningTime="2026-01-31 05:23:43.495110192 +0000 UTC m=+148.544271788" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.517354 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.517522 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.017497594 +0000 UTC m=+149.066659190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.517772 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.520646 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.020629223 +0000 UTC m=+149.069790909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.621310 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.621667 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.121652606 +0000 UTC m=+149.170814202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.639509 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zdgsp"] Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.640346 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.642484 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.669346 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdgsp"] Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.723122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.723174 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-catalog-content\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.723207 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.723229 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-utilities\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.723265 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkht\" (UniqueName: \"kubernetes.io/projected/f2a80941-a665-4ff2-8f03-841e88b654cc-kube-api-access-mmkht\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.724334 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.224321251 +0000 UTC m=+149.273482847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.728563 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.824855 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnvhs"] Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.825673 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.826581 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.826748 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-catalog-content\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.826771 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.826807 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.326782279 +0000 UTC m=+149.375943865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.826850 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.826904 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-utilities\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.827011 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkht\" (UniqueName: \"kubernetes.io/projected/f2a80941-a665-4ff2-8f03-841e88b654cc-kube-api-access-mmkht\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.827042 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.827093 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.827423 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.327407442 +0000 UTC m=+149.376569038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.828080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.828102 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-catalog-content\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.828123 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-utilities\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.832443 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.833612 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.834321 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.852020 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnvhs"] Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.892837 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkht\" (UniqueName: 
\"kubernetes.io/projected/f2a80941-a665-4ff2-8f03-841e88b654cc-kube-api-access-mmkht\") pod \"community-operators-zdgsp\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.912489 5050 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T05:23:43.324624847Z","Handler":null,"Name":""} Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.928596 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.928703 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.428689555 +0000 UTC m=+149.477851151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.928875 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hxz\" (UniqueName: \"kubernetes.io/projected/29fd7267-f00e-4b58-bdab-55bf2d0c801c-kube-api-access-h4hxz\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.928969 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-catalog-content\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.929000 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.929052 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-utilities\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:43 crc kubenswrapper[5050]: E0131 05:23:43.929325 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.429317919 +0000 UTC m=+149.478479515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.958659 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.959454 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.973229 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:43 crc kubenswrapper[5050]: I0131 05:23:43.980170 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.021019 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:44 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:44 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:44 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.021069 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.030103 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gh8nk"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.030994 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.048413 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gh8nk"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.062134 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.062437 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4hxz\" (UniqueName: \"kubernetes.io/projected/29fd7267-f00e-4b58-bdab-55bf2d0c801c-kube-api-access-h4hxz\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.062559 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-catalog-content\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.062663 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-utilities\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.063187 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-utilities\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:44 crc kubenswrapper[5050]: E0131 05:23:44.064541 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.564522132 +0000 UTC m=+149.613683728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.064807 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-catalog-content\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.121753 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4hxz\" (UniqueName: \"kubernetes.io/projected/29fd7267-f00e-4b58-bdab-55bf2d0c801c-kube-api-access-h4hxz\") pod \"certified-operators-tnvhs\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.163925 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.163993 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-catalog-content\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.164033 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9kn\" (UniqueName: \"kubernetes.io/projected/b775892b-5d01-4235-995f-5f38f01122ee-kube-api-access-8d9kn\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.164049 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-utilities\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: E0131 05:23:44.164302 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 05:23:44.664287698 +0000 UTC m=+149.713449294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mvp9" (UID: "82582675-89e4-4783-84df-ea11774c62aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.167768 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.219122 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-v7nml container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]log ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]etcd ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 05:23:44 crc kubenswrapper[5050]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 05:23:44 crc kubenswrapper[5050]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 05:23:44 crc 
kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 05:23:44 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 05:23:44 crc kubenswrapper[5050]: livez check failed Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.219176 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" podUID="91e87770-5e80-48f8-b274-31b0399b9935" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.242217 5050 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.242254 5050 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.245193 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mfttr"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.246053 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.264534 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.264709 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-catalog-content\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.264750 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d9kn\" (UniqueName: \"kubernetes.io/projected/b775892b-5d01-4235-995f-5f38f01122ee-kube-api-access-8d9kn\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.264767 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-utilities\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.265463 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-utilities\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " 
pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.265747 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-catalog-content\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.269062 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfttr"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.297535 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.314327 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d9kn\" (UniqueName: \"kubernetes.io/projected/b775892b-5d01-4235-995f-5f38f01122ee-kube-api-access-8d9kn\") pod \"community-operators-gh8nk\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.370459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwwc\" (UniqueName: \"kubernetes.io/projected/7c0f8d83-483d-499f-9fbc-c11768d3e97e-kube-api-access-kjwwc\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.370736 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-utilities\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.370766 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.370820 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-catalog-content\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.371285 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.453837 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.453882 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.472515 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjwwc\" (UniqueName: \"kubernetes.io/projected/7c0f8d83-483d-499f-9fbc-c11768d3e97e-kube-api-access-kjwwc\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.472558 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-utilities\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " 
pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.472620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-catalog-content\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.473080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-catalog-content\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.473489 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-utilities\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.514260 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjwwc\" (UniqueName: \"kubernetes.io/projected/7c0f8d83-483d-499f-9fbc-c11768d3e97e-kube-api-access-kjwwc\") pod \"certified-operators-mfttr\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.580873 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mvp9\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.593566 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.639315 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.707870 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnvhs"] Jan 31 05:23:44 crc kubenswrapper[5050]: W0131 05:23:44.750271 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29fd7267_f00e_4b58_bdab_55bf2d0c801c.slice/crio-bee6f59ed3676acee985afa4695b4a610d1f11451ee2beea7d5d22c2d5aedf73 WatchSource:0}: Error finding container bee6f59ed3676acee985afa4695b4a610d1f11451ee2beea7d5d22c2d5aedf73: Status 404 returned error can't find the container with id bee6f59ed3676acee985afa4695b4a610d1f11451ee2beea7d5d22c2d5aedf73 Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.799385 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdgsp"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.866197 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gh8nk"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.883971 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.884529 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.889415 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.889619 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.947280 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.984761 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mvp9"] Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.987896 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a356c0-6077-4b98-bebd-c617757e4124-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7a356c0-6077-4b98-bebd-c617757e4124\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:44 crc kubenswrapper[5050]: I0131 05:23:44.987943 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7a356c0-6077-4b98-bebd-c617757e4124-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7a356c0-6077-4b98-bebd-c617757e4124\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.007411 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:45 crc 
kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:45 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:45 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.007470 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.089147 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a356c0-6077-4b98-bebd-c617757e4124-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7a356c0-6077-4b98-bebd-c617757e4124\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.089187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7a356c0-6077-4b98-bebd-c617757e4124-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7a356c0-6077-4b98-bebd-c617757e4124\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.089239 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7a356c0-6077-4b98-bebd-c617757e4124-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7a356c0-6077-4b98-bebd-c617757e4124\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.111768 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a356c0-6077-4b98-bebd-c617757e4124-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"a7a356c0-6077-4b98-bebd-c617757e4124\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.203989 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.232575 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfttr"] Jan 31 05:23:45 crc kubenswrapper[5050]: W0131 05:23:45.245496 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0f8d83_483d_499f_9fbc_c11768d3e97e.slice/crio-7098bf69519db9ada10910857bb7488b1c08d79c2c4b77e60f24e5af3166c3f3 WatchSource:0}: Error finding container 7098bf69519db9ada10910857bb7488b1c08d79c2c4b77e60f24e5af3166c3f3: Status 404 returned error can't find the container with id 7098bf69519db9ada10910857bb7488b1c08d79c2c4b77e60f24e5af3166c3f3 Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.464522 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 05:23:45 crc kubenswrapper[5050]: W0131 05:23:45.474233 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda7a356c0_6077_4b98_bebd_c617757e4124.slice/crio-c56b18eb4542c0d47f89724947e517c914a43bf8a992c652c9fabc85a72e57fd WatchSource:0}: Error finding container c56b18eb4542c0d47f89724947e517c914a43bf8a992c652c9fabc85a72e57fd: Status 404 returned error can't find the container with id c56b18eb4542c0d47f89724947e517c914a43bf8a992c652c9fabc85a72e57fd Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.485302 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0636c9e674bdd5da1bee641b244b77b4e099d26ca3ce2ac8b839f2f091e3d933"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.485341 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"780192d07431cee0deebe4eeae6c5cf05b674f0739744cc33d50d48db58f7b36"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.490912 5050 generic.go:334] "Generic (PLEG): container finished" podID="b775892b-5d01-4235-995f-5f38f01122ee" containerID="4213ce4e6b5c2a78979df6d308cafc6a5d97b251eb2046aff6b0204a71be1212" exitCode=0 Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.491218 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh8nk" event={"ID":"b775892b-5d01-4235-995f-5f38f01122ee","Type":"ContainerDied","Data":"4213ce4e6b5c2a78979df6d308cafc6a5d97b251eb2046aff6b0204a71be1212"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.491245 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh8nk" event={"ID":"b775892b-5d01-4235-995f-5f38f01122ee","Type":"ContainerStarted","Data":"fba07c56d09c6713cf43fcfe996c0521bffdbcf85bb505493f5110dfb1e20e36"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.493839 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.497083 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d7309dd6402b4c6747b177a8021ca3028b67a09aca2a31beb55d2179a20e781a"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.497112 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3fc2567746459ea34e83725e5d3abc6baf96377058efbbf8bb6f16effaa24b9a"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.498361 5050 generic.go:334] "Generic (PLEG): container finished" podID="5915d8a1-8561-481b-990d-60cd35f30d7c" containerID="02ce8716faf717215c1eeb2a1c91391df3342c073322f56540b995807b7c763d" exitCode=0 Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.498398 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" event={"ID":"5915d8a1-8561-481b-990d-60cd35f30d7c","Type":"ContainerDied","Data":"02ce8716faf717215c1eeb2a1c91391df3342c073322f56540b995807b7c763d"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.501808 5050 generic.go:334] "Generic (PLEG): container finished" podID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerID="d7e42e990addfea469ba8301e391604ba8e1c28e0d658214f83f9a6ea75a3b23" exitCode=0 Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.501891 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnvhs" event={"ID":"29fd7267-f00e-4b58-bdab-55bf2d0c801c","Type":"ContainerDied","Data":"d7e42e990addfea469ba8301e391604ba8e1c28e0d658214f83f9a6ea75a3b23"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.501918 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnvhs" event={"ID":"29fd7267-f00e-4b58-bdab-55bf2d0c801c","Type":"ContainerStarted","Data":"bee6f59ed3676acee985afa4695b4a610d1f11451ee2beea7d5d22c2d5aedf73"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.510172 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfttr" 
event={"ID":"7c0f8d83-483d-499f-9fbc-c11768d3e97e","Type":"ContainerStarted","Data":"7098bf69519db9ada10910857bb7488b1c08d79c2c4b77e60f24e5af3166c3f3"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.511126 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7a356c0-6077-4b98-bebd-c617757e4124","Type":"ContainerStarted","Data":"c56b18eb4542c0d47f89724947e517c914a43bf8a992c652c9fabc85a72e57fd"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.512153 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7c744e80cfba71549c5c0ab76099bf4e89e96f53add7773f320527c9fa046d54"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.512181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"52b56cf5b46d01e8a2eabe8c619ad171ced113ac491de5188d8700bb4731a169"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.512341 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.515786 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" event={"ID":"82582675-89e4-4783-84df-ea11774c62aa","Type":"ContainerStarted","Data":"85d170fe087a0e766b6377cee8de77dcbc58bdc7c7c7c5e7671e4a3c2c99dd32"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.515816 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" 
event={"ID":"82582675-89e4-4783-84df-ea11774c62aa","Type":"ContainerStarted","Data":"f51d4b29ad21bf3969fbfa49147f5ed5deebf0cb63aeedcaf6f0709df8b01fed"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.516311 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.519255 5050 generic.go:334] "Generic (PLEG): container finished" podID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerID="42d37073af3f53fcd436d261c86e93640e43a216dd5a8b8cbbbd8d4e35d570c7" exitCode=0 Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.520002 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdgsp" event={"ID":"f2a80941-a665-4ff2-8f03-841e88b654cc","Type":"ContainerDied","Data":"42d37073af3f53fcd436d261c86e93640e43a216dd5a8b8cbbbd8d4e35d570c7"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.520028 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdgsp" event={"ID":"f2a80941-a665-4ff2-8f03-841e88b654cc","Type":"ContainerStarted","Data":"4aaa32bfc18b362fcf86d3ce2fedc10a9087611062f87ea4e7f7074cce04d03c"} Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.617124 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" podStartSLOduration=129.617108542 podStartE2EDuration="2m9.617108542s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:23:45.616457898 +0000 UTC m=+150.665619494" watchObservedRunningTime="2026-01-31 05:23:45.617108542 +0000 UTC m=+150.666270128" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.742789 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.816689 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m29pg"] Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.821501 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.823790 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29pg"] Jan 31 05:23:45 crc kubenswrapper[5050]: I0131 05:23:45.824092 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.000621 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-catalog-content\") pod \"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.001178 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmfq\" (UniqueName: \"kubernetes.io/projected/efd09525-8724-4184-9311-f2dd52139a81-kube-api-access-wgmfq\") pod \"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.001264 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-utilities\") pod \"redhat-marketplace-m29pg\" (UID: 
\"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.008929 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:46 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:46 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:46 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.009020 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.102454 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-utilities\") pod \"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.102534 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-catalog-content\") pod \"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.102559 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmfq\" (UniqueName: \"kubernetes.io/projected/efd09525-8724-4184-9311-f2dd52139a81-kube-api-access-wgmfq\") pod 
\"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.103079 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-catalog-content\") pod \"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.103342 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-utilities\") pod \"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.128095 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmfq\" (UniqueName: \"kubernetes.io/projected/efd09525-8724-4184-9311-f2dd52139a81-kube-api-access-wgmfq\") pod \"redhat-marketplace-m29pg\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.141376 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.219775 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xtlg6"] Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.220912 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.231348 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtlg6"] Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.421484 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-utilities\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.422494 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-catalog-content\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.422567 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnlf\" (UniqueName: \"kubernetes.io/projected/1340a566-94da-430a-abaa-2fa5eb25f675-kube-api-access-hlnlf\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.524037 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlnlf\" (UniqueName: \"kubernetes.io/projected/1340a566-94da-430a-abaa-2fa5eb25f675-kube-api-access-hlnlf\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.524160 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-utilities\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.524182 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-catalog-content\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.524971 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-catalog-content\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.525213 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-utilities\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.544760 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlnlf\" (UniqueName: \"kubernetes.io/projected/1340a566-94da-430a-abaa-2fa5eb25f675-kube-api-access-hlnlf\") pod \"redhat-marketplace-xtlg6\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.546466 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerID="d46ac469eab67f1ac3dbffd9d77dfbddfeebfb2810d6613004a571a2adef8de2" exitCode=0 Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.546746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfttr" event={"ID":"7c0f8d83-483d-499f-9fbc-c11768d3e97e","Type":"ContainerDied","Data":"d46ac469eab67f1ac3dbffd9d77dfbddfeebfb2810d6613004a571a2adef8de2"} Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.571761 5050 generic.go:334] "Generic (PLEG): container finished" podID="a7a356c0-6077-4b98-bebd-c617757e4124" containerID="90e69136881de8641ff275e9ac3c5787b425370cdf783ad7358769326314509f" exitCode=0 Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.572022 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7a356c0-6077-4b98-bebd-c617757e4124","Type":"ContainerDied","Data":"90e69136881de8641ff275e9ac3c5787b425370cdf783ad7358769326314509f"} Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.583744 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.614498 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29pg"] Jan 31 05:23:46 crc kubenswrapper[5050]: W0131 05:23:46.640941 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefd09525_8724_4184_9311_f2dd52139a81.slice/crio-ae3ecfec13045ade7b8bcd8ccd0af9b1c876eb394d78eb40951adcdd307c4443 WatchSource:0}: Error finding container ae3ecfec13045ade7b8bcd8ccd0af9b1c876eb394d78eb40951adcdd307c4443: Status 404 returned error can't find the container with id ae3ecfec13045ade7b8bcd8ccd0af9b1c876eb394d78eb40951adcdd307c4443 Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.820410 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qmfcw"] Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.829761 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.837038 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.856973 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmfcw"] Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.909857 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.934663 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5915d8a1-8561-481b-990d-60cd35f30d7c-secret-volume\") pod \"5915d8a1-8561-481b-990d-60cd35f30d7c\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.934715 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkldv\" (UniqueName: \"kubernetes.io/projected/5915d8a1-8561-481b-990d-60cd35f30d7c-kube-api-access-tkldv\") pod \"5915d8a1-8561-481b-990d-60cd35f30d7c\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.934749 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5915d8a1-8561-481b-990d-60cd35f30d7c-config-volume\") pod \"5915d8a1-8561-481b-990d-60cd35f30d7c\" (UID: \"5915d8a1-8561-481b-990d-60cd35f30d7c\") " Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.934858 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ztzc\" (UniqueName: \"kubernetes.io/projected/1bdc621b-09b4-43de-921b-be2322174c79-kube-api-access-6ztzc\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.934932 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-catalog-content\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " 
pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.934981 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-utilities\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.936734 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5915d8a1-8561-481b-990d-60cd35f30d7c-config-volume" (OuterVolumeSpecName: "config-volume") pod "5915d8a1-8561-481b-990d-60cd35f30d7c" (UID: "5915d8a1-8561-481b-990d-60cd35f30d7c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.948766 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5915d8a1-8561-481b-990d-60cd35f30d7c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5915d8a1-8561-481b-990d-60cd35f30d7c" (UID: "5915d8a1-8561-481b-990d-60cd35f30d7c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.951000 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5915d8a1-8561-481b-990d-60cd35f30d7c-kube-api-access-tkldv" (OuterVolumeSpecName: "kube-api-access-tkldv") pod "5915d8a1-8561-481b-990d-60cd35f30d7c" (UID: "5915d8a1-8561-481b-990d-60cd35f30d7c"). InnerVolumeSpecName "kube-api-access-tkldv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:23:46 crc kubenswrapper[5050]: I0131 05:23:46.985636 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtlg6"] Jan 31 05:23:47 crc kubenswrapper[5050]: W0131 05:23:47.003141 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1340a566_94da_430a_abaa_2fa5eb25f675.slice/crio-cf3fc29aea79f40b5e27bed9777b9a3db699295d8b07e41275b9bdbd34c4d3e5 WatchSource:0}: Error finding container cf3fc29aea79f40b5e27bed9777b9a3db699295d8b07e41275b9bdbd34c4d3e5: Status 404 returned error can't find the container with id cf3fc29aea79f40b5e27bed9777b9a3db699295d8b07e41275b9bdbd34c4d3e5 Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.008407 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:47 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:47 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:47 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.008445 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.035335 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ztzc\" (UniqueName: \"kubernetes.io/projected/1bdc621b-09b4-43de-921b-be2322174c79-kube-api-access-6ztzc\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" 
Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.035409 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-catalog-content\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.035446 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-utilities\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.035481 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5915d8a1-8561-481b-990d-60cd35f30d7c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.035492 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkldv\" (UniqueName: \"kubernetes.io/projected/5915d8a1-8561-481b-990d-60cd35f30d7c-kube-api-access-tkldv\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.035502 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5915d8a1-8561-481b-990d-60cd35f30d7c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.035927 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-utilities\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 
05:23:47.035992 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-catalog-content\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.051751 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ztzc\" (UniqueName: \"kubernetes.io/projected/1bdc621b-09b4-43de-921b-be2322174c79-kube-api-access-6ztzc\") pod \"redhat-operators-qmfcw\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.154138 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.221252 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9lbxv"] Jan 31 05:23:47 crc kubenswrapper[5050]: E0131 05:23:47.221446 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5915d8a1-8561-481b-990d-60cd35f30d7c" containerName="collect-profiles" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.221461 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5915d8a1-8561-481b-990d-60cd35f30d7c" containerName="collect-profiles" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.221548 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5915d8a1-8561-481b-990d-60cd35f30d7c" containerName="collect-profiles" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.222220 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.236730 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bmrg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.236802 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2bmrg" podUID="066f98b0-80a0-4cdd-ada3-76a1ebab23de" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.237093 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bmrg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.237142 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bmrg" podUID="066f98b0-80a0-4cdd-ada3-76a1ebab23de" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.240779 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lbxv"] Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.294825 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.294863 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.296332 5050 patch_prober.go:28] interesting pod/console-f9d7485db-fk4vq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.296389 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fk4vq" podUID="dab2d02c-8e81-40c5-a5ca-98be1833702e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.338548 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-utilities\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.338617 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-catalog-content\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.338648 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zz8s\" (UniqueName: \"kubernetes.io/projected/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-kube-api-access-6zz8s\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc 
kubenswrapper[5050]: I0131 05:23:47.344427 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.349481 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v7nml" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.446621 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmfcw"] Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.451571 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-catalog-content\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.451612 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zz8s\" (UniqueName: \"kubernetes.io/projected/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-kube-api-access-6zz8s\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.451691 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-utilities\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.452205 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-utilities\") pod \"redhat-operators-9lbxv\" (UID: 
\"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.452444 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-catalog-content\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.472162 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zz8s\" (UniqueName: \"kubernetes.io/projected/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-kube-api-access-6zz8s\") pod \"redhat-operators-9lbxv\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.578172 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmfcw" event={"ID":"1bdc621b-09b4-43de-921b-be2322174c79","Type":"ContainerStarted","Data":"83cabe1a0a54bc86068b86d1dfdc420d6442cb440d66ef984b92fa61c3485b7b"} Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.580787 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" event={"ID":"5915d8a1-8561-481b-990d-60cd35f30d7c","Type":"ContainerDied","Data":"f25268100efa96394f2d9e9a22ec15e5e660ca841a6d449e582bfc19ed1921e8"} Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.580833 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f25268100efa96394f2d9e9a22ec15e5e660ca841a6d449e582bfc19ed1921e8" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.580844 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.583005 5050 generic.go:334] "Generic (PLEG): container finished" podID="efd09525-8724-4184-9311-f2dd52139a81" containerID="6fd515980d0d6e5ace65369330fdaddc741c14d844482fee966397d0e34ee603" exitCode=0 Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.583068 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29pg" event={"ID":"efd09525-8724-4184-9311-f2dd52139a81","Type":"ContainerDied","Data":"6fd515980d0d6e5ace65369330fdaddc741c14d844482fee966397d0e34ee603"} Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.583097 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29pg" event={"ID":"efd09525-8724-4184-9311-f2dd52139a81","Type":"ContainerStarted","Data":"ae3ecfec13045ade7b8bcd8ccd0af9b1c876eb394d78eb40951adcdd307c4443"} Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.585363 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtlg6" event={"ID":"1340a566-94da-430a-abaa-2fa5eb25f675","Type":"ContainerDied","Data":"15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f"} Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.585255 5050 generic.go:334] "Generic (PLEG): container finished" podID="1340a566-94da-430a-abaa-2fa5eb25f675" containerID="15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f" exitCode=0 Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.585768 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtlg6" event={"ID":"1340a566-94da-430a-abaa-2fa5eb25f675","Type":"ContainerStarted","Data":"cf3fc29aea79f40b5e27bed9777b9a3db699295d8b07e41275b9bdbd34c4d3e5"} Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.603744 5050 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.838176 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.844429 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lbxv"] Jan 31 05:23:47 crc kubenswrapper[5050]: W0131 05:23:47.871542 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90f89cbe_5e0c_4fdd_ae5f_fdb706620c72.slice/crio-a6bb376c421f612cdab31659d3582c06b3c71c0f9caaf1848cc70b10f185e0a0 WatchSource:0}: Error finding container a6bb376c421f612cdab31659d3582c06b3c71c0f9caaf1848cc70b10f185e0a0: Status 404 returned error can't find the container with id a6bb376c421f612cdab31659d3582c06b3c71c0f9caaf1848cc70b10f185e0a0 Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.964728 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a356c0-6077-4b98-bebd-c617757e4124-kube-api-access\") pod \"a7a356c0-6077-4b98-bebd-c617757e4124\" (UID: \"a7a356c0-6077-4b98-bebd-c617757e4124\") " Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.964811 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7a356c0-6077-4b98-bebd-c617757e4124-kubelet-dir\") pod \"a7a356c0-6077-4b98-bebd-c617757e4124\" (UID: \"a7a356c0-6077-4b98-bebd-c617757e4124\") " Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.965149 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7a356c0-6077-4b98-bebd-c617757e4124-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"a7a356c0-6077-4b98-bebd-c617757e4124" (UID: "a7a356c0-6077-4b98-bebd-c617757e4124"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:23:47 crc kubenswrapper[5050]: I0131 05:23:47.982186 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a356c0-6077-4b98-bebd-c617757e4124-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a7a356c0-6077-4b98-bebd-c617757e4124" (UID: "a7a356c0-6077-4b98-bebd-c617757e4124"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.014429 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.018003 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:48 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:48 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:48 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.018071 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.067527 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a356c0-6077-4b98-bebd-c617757e4124-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.067704 5050 
reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7a356c0-6077-4b98-bebd-c617757e4124-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.596380 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7a356c0-6077-4b98-bebd-c617757e4124","Type":"ContainerDied","Data":"c56b18eb4542c0d47f89724947e517c914a43bf8a992c652c9fabc85a72e57fd"} Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.596413 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.596427 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c56b18eb4542c0d47f89724947e517c914a43bf8a992c652c9fabc85a72e57fd" Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.603257 5050 generic.go:334] "Generic (PLEG): container finished" podID="1bdc621b-09b4-43de-921b-be2322174c79" containerID="700ebf9f5037d09f5829646bf087771efd191bffd04792ce9061cae280f95005" exitCode=0 Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.603327 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmfcw" event={"ID":"1bdc621b-09b4-43de-921b-be2322174c79","Type":"ContainerDied","Data":"700ebf9f5037d09f5829646bf087771efd191bffd04792ce9061cae280f95005"} Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.610409 5050 generic.go:334] "Generic (PLEG): container finished" podID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerID="d366b0cd089d200a873a61a5e118608705b5cd9640b9e1f639829f227e512755" exitCode=0 Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.610447 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lbxv" 
event={"ID":"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72","Type":"ContainerDied","Data":"d366b0cd089d200a873a61a5e118608705b5cd9640b9e1f639829f227e512755"} Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.610474 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lbxv" event={"ID":"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72","Type":"ContainerStarted","Data":"a6bb376c421f612cdab31659d3582c06b3c71c0f9caaf1848cc70b10f185e0a0"} Jan 31 05:23:48 crc kubenswrapper[5050]: I0131 05:23:48.688732 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:23:49 crc kubenswrapper[5050]: I0131 05:23:49.010532 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:49 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:49 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:49 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:49 crc kubenswrapper[5050]: I0131 05:23:49.010591 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:50 crc kubenswrapper[5050]: I0131 05:23:50.007092 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:50 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Jan 31 05:23:50 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:50 crc kubenswrapper[5050]: 
healthz check failed Jan 31 05:23:50 crc kubenswrapper[5050]: I0131 05:23:50.007139 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.007747 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 05:23:51 crc kubenswrapper[5050]: [+]has-synced ok Jan 31 05:23:51 crc kubenswrapper[5050]: [+]process-running ok Jan 31 05:23:51 crc kubenswrapper[5050]: healthz check failed Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.007824 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.526366 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 05:23:51 crc kubenswrapper[5050]: E0131 05:23:51.528310 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a356c0-6077-4b98-bebd-c617757e4124" containerName="pruner" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.528326 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a356c0-6077-4b98-bebd-c617757e4124" containerName="pruner" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.528432 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a356c0-6077-4b98-bebd-c617757e4124" containerName="pruner" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.528778 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.530488 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.530690 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.558631 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.622462 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07223bb6-4730-45b3-8eb3-78cfc4cec433-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.622537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07223bb6-4730-45b3-8eb3-78cfc4cec433-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.723809 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07223bb6-4730-45b3-8eb3-78cfc4cec433-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.723892 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/07223bb6-4730-45b3-8eb3-78cfc4cec433-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.724111 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07223bb6-4730-45b3-8eb3-78cfc4cec433-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.752970 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07223bb6-4730-45b3-8eb3-78cfc4cec433-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:51 crc kubenswrapper[5050]: I0131 05:23:51.859765 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:23:52 crc kubenswrapper[5050]: I0131 05:23:52.008674 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:52 crc kubenswrapper[5050]: I0131 05:23:52.010582 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-87m8f" Jan 31 05:23:53 crc kubenswrapper[5050]: I0131 05:23:53.762015 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hfmnk" Jan 31 05:23:57 crc kubenswrapper[5050]: I0131 05:23:57.236808 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bmrg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 05:23:57 crc kubenswrapper[5050]: I0131 05:23:57.236836 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bmrg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 31 05:23:57 crc kubenswrapper[5050]: I0131 05:23:57.237843 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bmrg" podUID="066f98b0-80a0-4cdd-ada3-76a1ebab23de" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 31 05:23:57 crc kubenswrapper[5050]: I0131 05:23:57.237927 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2bmrg" podUID="066f98b0-80a0-4cdd-ada3-76a1ebab23de" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: 
connect: connection refused" Jan 31 05:23:57 crc kubenswrapper[5050]: I0131 05:23:57.393175 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:57 crc kubenswrapper[5050]: I0131 05:23:57.404988 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:23:58 crc kubenswrapper[5050]: I0131 05:23:58.518337 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:58 crc kubenswrapper[5050]: I0131 05:23:58.531154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e415fe7d-85f7-4a4f-8683-ffb3a0a8096d-metrics-certs\") pod \"network-metrics-daemon-ghk5r\" (UID: \"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d\") " pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:23:58 crc kubenswrapper[5050]: I0131 05:23:58.664824 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ghk5r" Jan 31 05:24:00 crc kubenswrapper[5050]: I0131 05:24:00.885100 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 05:24:04 crc kubenswrapper[5050]: I0131 05:24:04.601233 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:24:07 crc kubenswrapper[5050]: I0131 05:24:07.243127 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2bmrg" Jan 31 05:24:09 crc kubenswrapper[5050]: I0131 05:24:09.017873 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:24:09 crc kubenswrapper[5050]: I0131 05:24:09.017942 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:24:17 crc kubenswrapper[5050]: I0131 05:24:17.814803 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07223bb6-4730-45b3-8eb3-78cfc4cec433","Type":"ContainerStarted","Data":"ada6a2303860d2b8a6345212b159401824f7d5a0ba02d9cf81d5128c4f55cc88"} Jan 31 05:24:18 crc kubenswrapper[5050]: I0131 05:24:18.730012 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wq7pt" Jan 31 05:24:22 crc kubenswrapper[5050]: E0131 05:24:22.143180 5050 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 05:24:22 crc kubenswrapper[5050]: E0131 05:24:22.143390 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlnlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-xtlg6_openshift-marketplace(1340a566-94da-430a-abaa-2fa5eb25f675): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 05:24:22 crc kubenswrapper[5050]: E0131 05:24:22.145863 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xtlg6" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" Jan 31 05:24:23 crc kubenswrapper[5050]: I0131 05:24:23.993408 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.105862 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.107408 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.120312 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.217924 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a04179-b0fd-4a93-801a-37fb0154d52f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.218040 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a04179-b0fd-4a93-801a-37fb0154d52f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.319258 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a04179-b0fd-4a93-801a-37fb0154d52f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.319315 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a04179-b0fd-4a93-801a-37fb0154d52f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.319521 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/22a04179-b0fd-4a93-801a-37fb0154d52f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.351334 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a04179-b0fd-4a93-801a-37fb0154d52f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:25 crc kubenswrapper[5050]: I0131 05:24:25.443629 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:28 crc kubenswrapper[5050]: E0131 05:24:28.190688 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 05:24:28 crc kubenswrapper[5050]: E0131 05:24:28.191166 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgmfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m29pg_openshift-marketplace(efd09525-8724-4184-9311-f2dd52139a81): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 05:24:28 crc kubenswrapper[5050]: E0131 05:24:28.192460 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m29pg" podUID="efd09525-8724-4184-9311-f2dd52139a81" Jan 31 05:24:28 crc 
kubenswrapper[5050]: E0131 05:24:28.415262 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xtlg6" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.495796 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.496482 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.505872 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.620989 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/547e148c-16ac-498d-a6fc-1ef61b8d9501-kube-api-access\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.621045 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-var-lock\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.621147 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.722356 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-kubelet-dir\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.722420 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/547e148c-16ac-498d-a6fc-1ef61b8d9501-kube-api-access\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.722442 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-var-lock\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.722519 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-kubelet-dir\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.722534 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-var-lock\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.741722 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/547e148c-16ac-498d-a6fc-1ef61b8d9501-kube-api-access\") pod \"installer-9-crc\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:30 crc kubenswrapper[5050]: I0131 05:24:30.855574 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.310803 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.311041 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d9kn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gh8nk_openshift-marketplace(b775892b-5d01-4235-995f-5f38f01122ee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.312335 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gh8nk" podUID="b775892b-5d01-4235-995f-5f38f01122ee" Jan 31 05:24:32 crc 
kubenswrapper[5050]: E0131 05:24:32.464706 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m29pg" podUID="efd09525-8724-4184-9311-f2dd52139a81" Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.667055 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.667315 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjwwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mfttr_openshift-marketplace(7c0f8d83-483d-499f-9fbc-c11768d3e97e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.669084 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mfttr" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" Jan 31 05:24:32 crc 
kubenswrapper[5050]: I0131 05:24:32.749438 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 05:24:32 crc kubenswrapper[5050]: I0131 05:24:32.796286 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 05:24:32 crc kubenswrapper[5050]: W0131 05:24:32.816497 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod22a04179_b0fd_4a93_801a_37fb0154d52f.slice/crio-78ae528da75b3e78fe6ca2ac73284b6f46d0464bd6364c294adb8319a0b8f5d1 WatchSource:0}: Error finding container 78ae528da75b3e78fe6ca2ac73284b6f46d0464bd6364c294adb8319a0b8f5d1: Status 404 returned error can't find the container with id 78ae528da75b3e78fe6ca2ac73284b6f46d0464bd6364c294adb8319a0b8f5d1 Jan 31 05:24:32 crc kubenswrapper[5050]: I0131 05:24:32.876306 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ghk5r"] Jan 31 05:24:32 crc kubenswrapper[5050]: W0131 05:24:32.884592 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode415fe7d_85f7_4a4f_8683_ffb3a0a8096d.slice/crio-fe2519aad16e15cc7a9f11f5436a22dae9f02dbeb4f3dff8d6d2b8a4cd9acaef WatchSource:0}: Error finding container fe2519aad16e15cc7a9f11f5436a22dae9f02dbeb4f3dff8d6d2b8a4cd9acaef: Status 404 returned error can't find the container with id fe2519aad16e15cc7a9f11f5436a22dae9f02dbeb4f3dff8d6d2b8a4cd9acaef Jan 31 05:24:32 crc kubenswrapper[5050]: I0131 05:24:32.920352 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"547e148c-16ac-498d-a6fc-1ef61b8d9501","Type":"ContainerStarted","Data":"0a555e22d6fa9db9e3be30e3df36df3511de84c4c49b04ec458cd9ceeca9005e"} Jan 31 05:24:32 crc kubenswrapper[5050]: I0131 05:24:32.924752 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-ghk5r" event={"ID":"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d","Type":"ContainerStarted","Data":"fe2519aad16e15cc7a9f11f5436a22dae9f02dbeb4f3dff8d6d2b8a4cd9acaef"} Jan 31 05:24:32 crc kubenswrapper[5050]: I0131 05:24:32.926802 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22a04179-b0fd-4a93-801a-37fb0154d52f","Type":"ContainerStarted","Data":"78ae528da75b3e78fe6ca2ac73284b6f46d0464bd6364c294adb8319a0b8f5d1"} Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.929110 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gh8nk" podUID="b775892b-5d01-4235-995f-5f38f01122ee" Jan 31 05:24:32 crc kubenswrapper[5050]: E0131 05:24:32.929359 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mfttr" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" Jan 31 05:24:33 crc kubenswrapper[5050]: I0131 05:24:33.935319 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07223bb6-4730-45b3-8eb3-78cfc4cec433","Type":"ContainerStarted","Data":"fb266ac9512b3b7b0f87068a081734640b227e23cff359f889ca293a56fa3947"} Jan 31 05:24:34 crc kubenswrapper[5050]: I0131 05:24:34.946158 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"547e148c-16ac-498d-a6fc-1ef61b8d9501","Type":"ContainerStarted","Data":"fca549162212da50e01267e87861dc32dcb896d589b08f1628c623ff7c5f01b4"} Jan 31 05:24:34 crc kubenswrapper[5050]: I0131 
05:24:34.948405 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" event={"ID":"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d","Type":"ContainerStarted","Data":"8193e562a4399905d694528e207c2486607fe1785b97e17021ca75331f016ca6"} Jan 31 05:24:34 crc kubenswrapper[5050]: I0131 05:24:34.950572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22a04179-b0fd-4a93-801a-37fb0154d52f","Type":"ContainerStarted","Data":"49db251df52b350c7afc5292135f89246c67f208b1806d1a778b89aad382a2df"} Jan 31 05:24:35 crc kubenswrapper[5050]: I0131 05:24:35.957212 5050 generic.go:334] "Generic (PLEG): container finished" podID="07223bb6-4730-45b3-8eb3-78cfc4cec433" containerID="fb266ac9512b3b7b0f87068a081734640b227e23cff359f889ca293a56fa3947" exitCode=0 Jan 31 05:24:35 crc kubenswrapper[5050]: I0131 05:24:35.957303 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07223bb6-4730-45b3-8eb3-78cfc4cec433","Type":"ContainerDied","Data":"fb266ac9512b3b7b0f87068a081734640b227e23cff359f889ca293a56fa3947"} Jan 31 05:24:35 crc kubenswrapper[5050]: I0131 05:24:35.993563 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.993536229 podStartE2EDuration="5.993536229s" podCreationTimestamp="2026-01-31 05:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:24:35.982252832 +0000 UTC m=+201.031414448" watchObservedRunningTime="2026-01-31 05:24:35.993536229 +0000 UTC m=+201.042697835" Jan 31 05:24:36 crc kubenswrapper[5050]: I0131 05:24:36.004997 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=11.004970611 
podStartE2EDuration="11.004970611s" podCreationTimestamp="2026-01-31 05:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:24:36.001255429 +0000 UTC m=+201.050417035" watchObservedRunningTime="2026-01-31 05:24:36.004970611 +0000 UTC m=+201.054132207" Jan 31 05:24:37 crc kubenswrapper[5050]: E0131 05:24:37.381880 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 05:24:37 crc kubenswrapper[5050]: E0131 05:24:37.382329 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmkht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zdgsp_openshift-marketplace(f2a80941-a665-4ff2-8f03-841e88b654cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 05:24:37 crc kubenswrapper[5050]: E0131 05:24:37.383538 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zdgsp" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" Jan 31 05:24:39 crc 
kubenswrapper[5050]: I0131 05:24:39.017361 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:24:39 crc kubenswrapper[5050]: I0131 05:24:39.017791 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:24:39 crc kubenswrapper[5050]: I0131 05:24:39.017838 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:24:39 crc kubenswrapper[5050]: I0131 05:24:39.018402 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:24:39 crc kubenswrapper[5050]: I0131 05:24:39.018492 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89" gracePeriod=600 Jan 31 05:24:39 crc kubenswrapper[5050]: E0131 05:24:39.437759 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 05:24:39 crc kubenswrapper[5050]: E0131 05:24:39.437985 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4hxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tnvhs_openshift-marketplace(29fd7267-f00e-4b58-bdab-55bf2d0c801c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 
05:24:39 crc kubenswrapper[5050]: E0131 05:24:39.439173 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tnvhs" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" Jan 31 05:24:40 crc kubenswrapper[5050]: E0131 05:24:40.363325 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tnvhs" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" Jan 31 05:24:40 crc kubenswrapper[5050]: E0131 05:24:40.363756 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zdgsp" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.468056 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.473650 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07223bb6-4730-45b3-8eb3-78cfc4cec433-kubelet-dir\") pod \"07223bb6-4730-45b3-8eb3-78cfc4cec433\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.473910 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07223bb6-4730-45b3-8eb3-78cfc4cec433-kube-api-access\") pod \"07223bb6-4730-45b3-8eb3-78cfc4cec433\" (UID: \"07223bb6-4730-45b3-8eb3-78cfc4cec433\") " Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.474044 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07223bb6-4730-45b3-8eb3-78cfc4cec433-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07223bb6-4730-45b3-8eb3-78cfc4cec433" (UID: "07223bb6-4730-45b3-8eb3-78cfc4cec433"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.474330 5050 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07223bb6-4730-45b3-8eb3-78cfc4cec433-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.483444 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07223bb6-4730-45b3-8eb3-78cfc4cec433-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07223bb6-4730-45b3-8eb3-78cfc4cec433" (UID: "07223bb6-4730-45b3-8eb3-78cfc4cec433"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.575657 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07223bb6-4730-45b3-8eb3-78cfc4cec433-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.987826 5050 generic.go:334] "Generic (PLEG): container finished" podID="22a04179-b0fd-4a93-801a-37fb0154d52f" containerID="49db251df52b350c7afc5292135f89246c67f208b1806d1a778b89aad382a2df" exitCode=0 Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.987984 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22a04179-b0fd-4a93-801a-37fb0154d52f","Type":"ContainerDied","Data":"49db251df52b350c7afc5292135f89246c67f208b1806d1a778b89aad382a2df"} Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.991717 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07223bb6-4730-45b3-8eb3-78cfc4cec433","Type":"ContainerDied","Data":"ada6a2303860d2b8a6345212b159401824f7d5a0ba02d9cf81d5128c4f55cc88"} Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.991783 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada6a2303860d2b8a6345212b159401824f7d5a0ba02d9cf81d5128c4f55cc88" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.991752 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.995448 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89" exitCode=0 Jan 31 05:24:40 crc kubenswrapper[5050]: I0131 05:24:40.995514 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89"} Jan 31 05:24:41 crc kubenswrapper[5050]: E0131 05:24:41.083264 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 05:24:41 crc kubenswrapper[5050]: E0131 05:24:41.083459 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zz8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9lbxv_openshift-marketplace(90f89cbe-5e0c-4fdd-ae5f-fdb706620c72): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 05:24:41 crc kubenswrapper[5050]: E0131 05:24:41.084902 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9lbxv" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" Jan 31 05:24:42 crc 
kubenswrapper[5050]: I0131 05:24:42.002552 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"f169ed087ec5dc88ea90cd249e2934f2701dee31413e8924bdbf46d544a5a4f8"} Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.004977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ghk5r" event={"ID":"e415fe7d-85f7-4a4f-8683-ffb3a0a8096d","Type":"ContainerStarted","Data":"c3fd3a9ed273e2c646a30fd4efab0d39f6286343ce5aaad0ea33b90652215a66"} Jan 31 05:24:42 crc kubenswrapper[5050]: E0131 05:24:42.006936 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9lbxv" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.283137 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.296993 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a04179-b0fd-4a93-801a-37fb0154d52f-kubelet-dir\") pod \"22a04179-b0fd-4a93-801a-37fb0154d52f\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.297057 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a04179-b0fd-4a93-801a-37fb0154d52f-kube-api-access\") pod \"22a04179-b0fd-4a93-801a-37fb0154d52f\" (UID: \"22a04179-b0fd-4a93-801a-37fb0154d52f\") " Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.297798 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22a04179-b0fd-4a93-801a-37fb0154d52f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22a04179-b0fd-4a93-801a-37fb0154d52f" (UID: "22a04179-b0fd-4a93-801a-37fb0154d52f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.309461 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a04179-b0fd-4a93-801a-37fb0154d52f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22a04179-b0fd-4a93-801a-37fb0154d52f" (UID: "22a04179-b0fd-4a93-801a-37fb0154d52f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.398275 5050 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22a04179-b0fd-4a93-801a-37fb0154d52f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:42 crc kubenswrapper[5050]: I0131 05:24:42.398608 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a04179-b0fd-4a93-801a-37fb0154d52f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:43 crc kubenswrapper[5050]: I0131 05:24:43.014551 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22a04179-b0fd-4a93-801a-37fb0154d52f","Type":"ContainerDied","Data":"78ae528da75b3e78fe6ca2ac73284b6f46d0464bd6364c294adb8319a0b8f5d1"} Jan 31 05:24:43 crc kubenswrapper[5050]: I0131 05:24:43.014591 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78ae528da75b3e78fe6ca2ac73284b6f46d0464bd6364c294adb8319a0b8f5d1" Jan 31 05:24:43 crc kubenswrapper[5050]: I0131 05:24:43.014619 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 05:24:44 crc kubenswrapper[5050]: I0131 05:24:44.043861 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ghk5r" podStartSLOduration=188.04383776 podStartE2EDuration="3m8.04383776s" podCreationTimestamp="2026-01-31 05:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:24:44.043765668 +0000 UTC m=+209.092927304" watchObservedRunningTime="2026-01-31 05:24:44.04383776 +0000 UTC m=+209.092999386" Jan 31 05:24:44 crc kubenswrapper[5050]: E0131 05:24:44.852990 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 05:24:44 crc kubenswrapper[5050]: E0131 05:24:44.853377 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ztzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qmfcw_openshift-marketplace(1bdc621b-09b4-43de-921b-be2322174c79): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 05:24:44 crc kubenswrapper[5050]: E0131 05:24:44.855131 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qmfcw" podUID="1bdc621b-09b4-43de-921b-be2322174c79" Jan 31 05:24:45 crc 
kubenswrapper[5050]: E0131 05:24:45.036926 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qmfcw" podUID="1bdc621b-09b4-43de-921b-be2322174c79" Jan 31 05:24:46 crc kubenswrapper[5050]: I0131 05:24:46.035794 5050 generic.go:334] "Generic (PLEG): container finished" podID="1340a566-94da-430a-abaa-2fa5eb25f675" containerID="76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e" exitCode=0 Jan 31 05:24:46 crc kubenswrapper[5050]: I0131 05:24:46.035882 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtlg6" event={"ID":"1340a566-94da-430a-abaa-2fa5eb25f675","Type":"ContainerDied","Data":"76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e"} Jan 31 05:24:47 crc kubenswrapper[5050]: I0131 05:24:47.042552 5050 generic.go:334] "Generic (PLEG): container finished" podID="efd09525-8724-4184-9311-f2dd52139a81" containerID="2923098640b4747de949fcb609515a97dba52cb5620a36f2f95f75f4c7d6fe47" exitCode=0 Jan 31 05:24:47 crc kubenswrapper[5050]: I0131 05:24:47.042616 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29pg" event={"ID":"efd09525-8724-4184-9311-f2dd52139a81","Type":"ContainerDied","Data":"2923098640b4747de949fcb609515a97dba52cb5620a36f2f95f75f4c7d6fe47"} Jan 31 05:24:47 crc kubenswrapper[5050]: I0131 05:24:47.045694 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtlg6" event={"ID":"1340a566-94da-430a-abaa-2fa5eb25f675","Type":"ContainerStarted","Data":"474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3"} Jan 31 05:24:47 crc kubenswrapper[5050]: I0131 05:24:47.076219 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-xtlg6" podStartSLOduration=2.088156449 podStartE2EDuration="1m1.076200929s" podCreationTimestamp="2026-01-31 05:23:46 +0000 UTC" firstStartedPulling="2026-01-31 05:23:47.587449945 +0000 UTC m=+152.636611541" lastFinishedPulling="2026-01-31 05:24:46.575494425 +0000 UTC m=+211.624656021" observedRunningTime="2026-01-31 05:24:47.072260722 +0000 UTC m=+212.121422318" watchObservedRunningTime="2026-01-31 05:24:47.076200929 +0000 UTC m=+212.125362525" Jan 31 05:24:48 crc kubenswrapper[5050]: I0131 05:24:48.052449 5050 generic.go:334] "Generic (PLEG): container finished" podID="b775892b-5d01-4235-995f-5f38f01122ee" containerID="abe8c8fdf0d06256a968324cd5f1b3ed91694e691619c4d8f4fd37643e865ee8" exitCode=0 Jan 31 05:24:48 crc kubenswrapper[5050]: I0131 05:24:48.052571 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh8nk" event={"ID":"b775892b-5d01-4235-995f-5f38f01122ee","Type":"ContainerDied","Data":"abe8c8fdf0d06256a968324cd5f1b3ed91694e691619c4d8f4fd37643e865ee8"} Jan 31 05:24:48 crc kubenswrapper[5050]: I0131 05:24:48.055937 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29pg" event={"ID":"efd09525-8724-4184-9311-f2dd52139a81","Type":"ContainerStarted","Data":"7c0032ec02d6d5ab12f383ca9454e86cd7f6eef2c446af1a1b3d42f9f0079dcb"} Jan 31 05:24:48 crc kubenswrapper[5050]: I0131 05:24:48.112839 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m29pg" podStartSLOduration=3.25252164 podStartE2EDuration="1m3.112821167s" podCreationTimestamp="2026-01-31 05:23:45 +0000 UTC" firstStartedPulling="2026-01-31 05:23:47.584588336 +0000 UTC m=+152.633749932" lastFinishedPulling="2026-01-31 05:24:47.444887863 +0000 UTC m=+212.494049459" observedRunningTime="2026-01-31 05:24:48.110261718 +0000 UTC m=+213.159423334" watchObservedRunningTime="2026-01-31 05:24:48.112821167 
+0000 UTC m=+213.161982763" Jan 31 05:24:49 crc kubenswrapper[5050]: I0131 05:24:49.062395 5050 generic.go:334] "Generic (PLEG): container finished" podID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerID="95897c00eccfde65243c80507c5844cfb1cc297f7544570f6d4ea97dcf7cf8ab" exitCode=0 Jan 31 05:24:49 crc kubenswrapper[5050]: I0131 05:24:49.062479 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfttr" event={"ID":"7c0f8d83-483d-499f-9fbc-c11768d3e97e","Type":"ContainerDied","Data":"95897c00eccfde65243c80507c5844cfb1cc297f7544570f6d4ea97dcf7cf8ab"} Jan 31 05:24:49 crc kubenswrapper[5050]: I0131 05:24:49.066191 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh8nk" event={"ID":"b775892b-5d01-4235-995f-5f38f01122ee","Type":"ContainerStarted","Data":"bd98a923ae4bab9fb9fcfadfaf9dbb002658dcd3a055088b633d52b84aff2232"} Jan 31 05:24:49 crc kubenswrapper[5050]: I0131 05:24:49.116787 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gh8nk" podStartSLOduration=2.110211788 podStartE2EDuration="1m5.116772233s" podCreationTimestamp="2026-01-31 05:23:44 +0000 UTC" firstStartedPulling="2026-01-31 05:23:45.493622535 +0000 UTC m=+150.542784131" lastFinishedPulling="2026-01-31 05:24:48.50018299 +0000 UTC m=+213.549344576" observedRunningTime="2026-01-31 05:24:49.113930166 +0000 UTC m=+214.163092062" watchObservedRunningTime="2026-01-31 05:24:49.116772233 +0000 UTC m=+214.165933829" Jan 31 05:24:50 crc kubenswrapper[5050]: I0131 05:24:50.075568 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfttr" event={"ID":"7c0f8d83-483d-499f-9fbc-c11768d3e97e","Type":"ContainerStarted","Data":"834ca1dd021e7580f99c4556a4575fd4ff3ef137519bb6e7b2abe783767af5d3"} Jan 31 05:24:50 crc kubenswrapper[5050]: I0131 05:24:50.110778 5050 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-mfttr" podStartSLOduration=3.1842858019999998 podStartE2EDuration="1m6.110753888s" podCreationTimestamp="2026-01-31 05:23:44 +0000 UTC" firstStartedPulling="2026-01-31 05:23:46.553370658 +0000 UTC m=+151.602532254" lastFinishedPulling="2026-01-31 05:24:49.479838744 +0000 UTC m=+214.529000340" observedRunningTime="2026-01-31 05:24:50.107542151 +0000 UTC m=+215.156703767" watchObservedRunningTime="2026-01-31 05:24:50.110753888 +0000 UTC m=+215.159915494" Jan 31 05:24:54 crc kubenswrapper[5050]: I0131 05:24:54.372083 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:24:54 crc kubenswrapper[5050]: I0131 05:24:54.372807 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:24:54 crc kubenswrapper[5050]: I0131 05:24:54.640419 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:24:54 crc kubenswrapper[5050]: I0131 05:24:54.640454 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:24:54 crc kubenswrapper[5050]: I0131 05:24:54.653135 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:24:54 crc kubenswrapper[5050]: I0131 05:24:54.687199 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:24:55 crc kubenswrapper[5050]: I0131 05:24:55.107844 5050 generic.go:334] "Generic (PLEG): container finished" podID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerID="d9a92b1628de778d4f0138f718695a12753aa13bd010169fbf6ada1e82334518" exitCode=0 Jan 31 05:24:55 crc kubenswrapper[5050]: I0131 05:24:55.108729 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnvhs" event={"ID":"29fd7267-f00e-4b58-bdab-55bf2d0c801c","Type":"ContainerDied","Data":"d9a92b1628de778d4f0138f718695a12753aa13bd010169fbf6ada1e82334518"} Jan 31 05:24:55 crc kubenswrapper[5050]: I0131 05:24:55.165745 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:24:55 crc kubenswrapper[5050]: I0131 05:24:55.167169 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:24:55 crc kubenswrapper[5050]: I0131 05:24:55.670521 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gh8nk"] Jan 31 05:24:56 crc kubenswrapper[5050]: I0131 05:24:56.116930 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnvhs" event={"ID":"29fd7267-f00e-4b58-bdab-55bf2d0c801c","Type":"ContainerStarted","Data":"8a717fc578b95a9f6518121fda39ad508f76dbcc14a8531d8cc20d5a7770e036"} Jan 31 05:24:56 crc kubenswrapper[5050]: I0131 05:24:56.140574 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnvhs" podStartSLOduration=3.12254102 podStartE2EDuration="1m13.140548454s" podCreationTimestamp="2026-01-31 05:23:43 +0000 UTC" firstStartedPulling="2026-01-31 05:23:45.504284721 +0000 UTC m=+150.553446307" lastFinishedPulling="2026-01-31 05:24:55.522292145 +0000 UTC m=+220.571453741" observedRunningTime="2026-01-31 05:24:56.136566776 +0000 UTC m=+221.185728422" watchObservedRunningTime="2026-01-31 05:24:56.140548454 +0000 UTC m=+221.189710100" Jan 31 05:24:56 crc kubenswrapper[5050]: I0131 05:24:56.141859 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:24:56 crc kubenswrapper[5050]: 
I0131 05:24:56.141995 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:24:56 crc kubenswrapper[5050]: I0131 05:24:56.187037 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:24:56 crc kubenswrapper[5050]: I0131 05:24:56.584552 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:24:56 crc kubenswrapper[5050]: I0131 05:24:56.584887 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:24:56 crc kubenswrapper[5050]: I0131 05:24:56.634558 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:24:57 crc kubenswrapper[5050]: I0131 05:24:57.068376 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfttr"] Jan 31 05:24:57 crc kubenswrapper[5050]: I0131 05:24:57.121890 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gh8nk" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="registry-server" containerID="cri-o://bd98a923ae4bab9fb9fcfadfaf9dbb002658dcd3a055088b633d52b84aff2232" gracePeriod=2 Jan 31 05:24:57 crc kubenswrapper[5050]: I0131 05:24:57.124027 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mfttr" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="registry-server" containerID="cri-o://834ca1dd021e7580f99c4556a4575fd4ff3ef137519bb6e7b2abe783767af5d3" gracePeriod=2 Jan 31 05:24:57 crc kubenswrapper[5050]: I0131 05:24:57.171936 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:24:57 crc kubenswrapper[5050]: I0131 05:24:57.181671 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.129542 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdgsp" event={"ID":"f2a80941-a665-4ff2-8f03-841e88b654cc","Type":"ContainerStarted","Data":"bf11b6c771e869b3a60307470bbaaa28f8dd6f44ed4ec1cdd0007f8c85121ccc"} Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.133724 5050 generic.go:334] "Generic (PLEG): container finished" podID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerID="834ca1dd021e7580f99c4556a4575fd4ff3ef137519bb6e7b2abe783767af5d3" exitCode=0 Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.133814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfttr" event={"ID":"7c0f8d83-483d-499f-9fbc-c11768d3e97e","Type":"ContainerDied","Data":"834ca1dd021e7580f99c4556a4575fd4ff3ef137519bb6e7b2abe783767af5d3"} Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.149438 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lbxv" event={"ID":"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72","Type":"ContainerStarted","Data":"3b2a3d4f22b16129da859c90e8c48df0bf96ebcae65633250fda7092b5a515c9"} Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.156132 5050 generic.go:334] "Generic (PLEG): container finished" podID="b775892b-5d01-4235-995f-5f38f01122ee" containerID="bd98a923ae4bab9fb9fcfadfaf9dbb002658dcd3a055088b633d52b84aff2232" exitCode=0 Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.156668 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh8nk" 
event={"ID":"b775892b-5d01-4235-995f-5f38f01122ee","Type":"ContainerDied","Data":"bd98a923ae4bab9fb9fcfadfaf9dbb002658dcd3a055088b633d52b84aff2232"} Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.319325 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.334242 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.355233 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-catalog-content\") pod \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.355281 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjwwc\" (UniqueName: \"kubernetes.io/projected/7c0f8d83-483d-499f-9fbc-c11768d3e97e-kube-api-access-kjwwc\") pod \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.355343 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-utilities\") pod \"b775892b-5d01-4235-995f-5f38f01122ee\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.355376 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d9kn\" (UniqueName: \"kubernetes.io/projected/b775892b-5d01-4235-995f-5f38f01122ee-kube-api-access-8d9kn\") pod \"b775892b-5d01-4235-995f-5f38f01122ee\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " Jan 31 05:24:58 
crc kubenswrapper[5050]: I0131 05:24:58.355401 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-catalog-content\") pod \"b775892b-5d01-4235-995f-5f38f01122ee\" (UID: \"b775892b-5d01-4235-995f-5f38f01122ee\") " Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.355453 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-utilities\") pod \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\" (UID: \"7c0f8d83-483d-499f-9fbc-c11768d3e97e\") " Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.357235 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-utilities" (OuterVolumeSpecName: "utilities") pod "b775892b-5d01-4235-995f-5f38f01122ee" (UID: "b775892b-5d01-4235-995f-5f38f01122ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.359046 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-utilities" (OuterVolumeSpecName: "utilities") pod "7c0f8d83-483d-499f-9fbc-c11768d3e97e" (UID: "7c0f8d83-483d-499f-9fbc-c11768d3e97e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.361764 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0f8d83-483d-499f-9fbc-c11768d3e97e-kube-api-access-kjwwc" (OuterVolumeSpecName: "kube-api-access-kjwwc") pod "7c0f8d83-483d-499f-9fbc-c11768d3e97e" (UID: "7c0f8d83-483d-499f-9fbc-c11768d3e97e"). InnerVolumeSpecName "kube-api-access-kjwwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.362184 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b775892b-5d01-4235-995f-5f38f01122ee-kube-api-access-8d9kn" (OuterVolumeSpecName: "kube-api-access-8d9kn") pod "b775892b-5d01-4235-995f-5f38f01122ee" (UID: "b775892b-5d01-4235-995f-5f38f01122ee"). InnerVolumeSpecName "kube-api-access-8d9kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.399721 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c0f8d83-483d-499f-9fbc-c11768d3e97e" (UID: "7c0f8d83-483d-499f-9fbc-c11768d3e97e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.456896 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjwwc\" (UniqueName: \"kubernetes.io/projected/7c0f8d83-483d-499f-9fbc-c11768d3e97e-kube-api-access-kjwwc\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.456935 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.456945 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d9kn\" (UniqueName: \"kubernetes.io/projected/b775892b-5d01-4235-995f-5f38f01122ee-kube-api-access-8d9kn\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.456969 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.456977 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0f8d83-483d-499f-9fbc-c11768d3e97e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.825486 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b775892b-5d01-4235-995f-5f38f01122ee" (UID: "b775892b-5d01-4235-995f-5f38f01122ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:24:58 crc kubenswrapper[5050]: I0131 05:24:58.861390 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b775892b-5d01-4235-995f-5f38f01122ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.164191 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmfcw" event={"ID":"1bdc621b-09b4-43de-921b-be2322174c79","Type":"ContainerStarted","Data":"6d3ceb67ef6737fe037fbf1ee7db6ef47975a8d01a750b50207f35671cf706fe"} Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.165795 5050 generic.go:334] "Generic (PLEG): container finished" podID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerID="bf11b6c771e869b3a60307470bbaaa28f8dd6f44ed4ec1cdd0007f8c85121ccc" exitCode=0 Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.165855 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdgsp" event={"ID":"f2a80941-a665-4ff2-8f03-841e88b654cc","Type":"ContainerDied","Data":"bf11b6c771e869b3a60307470bbaaa28f8dd6f44ed4ec1cdd0007f8c85121ccc"} Jan 31 
05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.172467 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfttr" event={"ID":"7c0f8d83-483d-499f-9fbc-c11768d3e97e","Type":"ContainerDied","Data":"7098bf69519db9ada10910857bb7488b1c08d79c2c4b77e60f24e5af3166c3f3"} Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.172474 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfttr" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.172533 5050 scope.go:117] "RemoveContainer" containerID="834ca1dd021e7580f99c4556a4575fd4ff3ef137519bb6e7b2abe783767af5d3" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.177660 5050 generic.go:334] "Generic (PLEG): container finished" podID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerID="3b2a3d4f22b16129da859c90e8c48df0bf96ebcae65633250fda7092b5a515c9" exitCode=0 Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.177744 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lbxv" event={"ID":"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72","Type":"ContainerDied","Data":"3b2a3d4f22b16129da859c90e8c48df0bf96ebcae65633250fda7092b5a515c9"} Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.180536 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gh8nk" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.191081 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh8nk" event={"ID":"b775892b-5d01-4235-995f-5f38f01122ee","Type":"ContainerDied","Data":"fba07c56d09c6713cf43fcfe996c0521bffdbcf85bb505493f5110dfb1e20e36"} Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.200760 5050 scope.go:117] "RemoveContainer" containerID="95897c00eccfde65243c80507c5844cfb1cc297f7544570f6d4ea97dcf7cf8ab" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.216388 5050 scope.go:117] "RemoveContainer" containerID="d46ac469eab67f1ac3dbffd9d77dfbddfeebfb2810d6613004a571a2adef8de2" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.245680 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gh8nk"] Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.250827 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gh8nk"] Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.263393 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfttr"] Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.267971 5050 scope.go:117] "RemoveContainer" containerID="bd98a923ae4bab9fb9fcfadfaf9dbb002658dcd3a055088b633d52b84aff2232" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.267391 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mfttr"] Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.281545 5050 scope.go:117] "RemoveContainer" containerID="abe8c8fdf0d06256a968324cd5f1b3ed91694e691619c4d8f4fd37643e865ee8" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.302709 5050 scope.go:117] "RemoveContainer" containerID="4213ce4e6b5c2a78979df6d308cafc6a5d97b251eb2046aff6b0204a71be1212" Jan 31 
05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.748569 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" path="/var/lib/kubelet/pods/7c0f8d83-483d-499f-9fbc-c11768d3e97e/volumes" Jan 31 05:24:59 crc kubenswrapper[5050]: I0131 05:24:59.749373 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b775892b-5d01-4235-995f-5f38f01122ee" path="/var/lib/kubelet/pods/b775892b-5d01-4235-995f-5f38f01122ee/volumes" Jan 31 05:25:00 crc kubenswrapper[5050]: I0131 05:25:00.188478 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdgsp" event={"ID":"f2a80941-a665-4ff2-8f03-841e88b654cc","Type":"ContainerStarted","Data":"db5bec45b02d2153a7e8f5d1eb6102de1911bda4b253206ab36ca4f92df33af3"} Jan 31 05:25:00 crc kubenswrapper[5050]: I0131 05:25:00.196814 5050 generic.go:334] "Generic (PLEG): container finished" podID="1bdc621b-09b4-43de-921b-be2322174c79" containerID="6d3ceb67ef6737fe037fbf1ee7db6ef47975a8d01a750b50207f35671cf706fe" exitCode=0 Jan 31 05:25:00 crc kubenswrapper[5050]: I0131 05:25:00.196847 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmfcw" event={"ID":"1bdc621b-09b4-43de-921b-be2322174c79","Type":"ContainerDied","Data":"6d3ceb67ef6737fe037fbf1ee7db6ef47975a8d01a750b50207f35671cf706fe"} Jan 31 05:25:00 crc kubenswrapper[5050]: I0131 05:25:00.211559 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zdgsp" podStartSLOduration=2.768791174 podStartE2EDuration="1m17.211542545s" podCreationTimestamp="2026-01-31 05:23:43 +0000 UTC" firstStartedPulling="2026-01-31 05:23:45.520968456 +0000 UTC m=+150.570130052" lastFinishedPulling="2026-01-31 05:24:59.963719787 +0000 UTC m=+225.012881423" observedRunningTime="2026-01-31 05:25:00.209326275 +0000 UTC m=+225.258487871" watchObservedRunningTime="2026-01-31 05:25:00.211542545 
+0000 UTC m=+225.260704131" Jan 31 05:25:00 crc kubenswrapper[5050]: I0131 05:25:00.469070 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtlg6"] Jan 31 05:25:00 crc kubenswrapper[5050]: I0131 05:25:00.469334 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xtlg6" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="registry-server" containerID="cri-o://474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3" gracePeriod=2 Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.107699 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.189299 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlnlf\" (UniqueName: \"kubernetes.io/projected/1340a566-94da-430a-abaa-2fa5eb25f675-kube-api-access-hlnlf\") pod \"1340a566-94da-430a-abaa-2fa5eb25f675\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.199406 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1340a566-94da-430a-abaa-2fa5eb25f675-kube-api-access-hlnlf" (OuterVolumeSpecName: "kube-api-access-hlnlf") pod "1340a566-94da-430a-abaa-2fa5eb25f675" (UID: "1340a566-94da-430a-abaa-2fa5eb25f675"). InnerVolumeSpecName "kube-api-access-hlnlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.206021 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lbxv" event={"ID":"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72","Type":"ContainerStarted","Data":"6574cde87c8635c82d06a74e6d923efdd15da6e9e32aaa7373fd18f2384c24ff"} Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.209062 5050 generic.go:334] "Generic (PLEG): container finished" podID="1340a566-94da-430a-abaa-2fa5eb25f675" containerID="474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3" exitCode=0 Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.209092 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtlg6" event={"ID":"1340a566-94da-430a-abaa-2fa5eb25f675","Type":"ContainerDied","Data":"474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3"} Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.209114 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xtlg6" event={"ID":"1340a566-94da-430a-abaa-2fa5eb25f675","Type":"ContainerDied","Data":"cf3fc29aea79f40b5e27bed9777b9a3db699295d8b07e41275b9bdbd34c4d3e5"} Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.209129 5050 scope.go:117] "RemoveContainer" containerID="474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.209216 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xtlg6" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.237173 5050 scope.go:117] "RemoveContainer" containerID="76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.251137 5050 scope.go:117] "RemoveContainer" containerID="15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.264964 5050 scope.go:117] "RemoveContainer" containerID="474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3" Jan 31 05:25:01 crc kubenswrapper[5050]: E0131 05:25:01.265519 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3\": container with ID starting with 474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3 not found: ID does not exist" containerID="474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.265572 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3"} err="failed to get container status \"474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3\": rpc error: code = NotFound desc = could not find container \"474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3\": container with ID starting with 474450d1eadac34dc38c5e4a224c5d8b7e6e286c95332ee05fc3008615db9dd3 not found: ID does not exist" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.265607 5050 scope.go:117] "RemoveContainer" containerID="76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e" Jan 31 05:25:01 crc kubenswrapper[5050]: E0131 05:25:01.265968 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e\": container with ID starting with 76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e not found: ID does not exist" containerID="76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.266008 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e"} err="failed to get container status \"76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e\": rpc error: code = NotFound desc = could not find container \"76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e\": container with ID starting with 76c760d14c7741188c38865f09a0ac92b21ea887b5e545fbe487369f0d277f8e not found: ID does not exist" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.266031 5050 scope.go:117] "RemoveContainer" containerID="15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f" Jan 31 05:25:01 crc kubenswrapper[5050]: E0131 05:25:01.266247 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f\": container with ID starting with 15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f not found: ID does not exist" containerID="15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.266281 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f"} err="failed to get container status \"15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f\": rpc error: code = NotFound desc = could not find container 
\"15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f\": container with ID starting with 15f63ddeeb407c4dd0ecbf0208a558a9753622f76bfb3fd0989a64f21427747f not found: ID does not exist" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.290330 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-catalog-content\") pod \"1340a566-94da-430a-abaa-2fa5eb25f675\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.290586 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-utilities\") pod \"1340a566-94da-430a-abaa-2fa5eb25f675\" (UID: \"1340a566-94da-430a-abaa-2fa5eb25f675\") " Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.291002 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlnlf\" (UniqueName: \"kubernetes.io/projected/1340a566-94da-430a-abaa-2fa5eb25f675-kube-api-access-hlnlf\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.291323 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-utilities" (OuterVolumeSpecName: "utilities") pod "1340a566-94da-430a-abaa-2fa5eb25f675" (UID: "1340a566-94da-430a-abaa-2fa5eb25f675"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.318240 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1340a566-94da-430a-abaa-2fa5eb25f675" (UID: "1340a566-94da-430a-abaa-2fa5eb25f675"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.391586 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.391615 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1340a566-94da-430a-abaa-2fa5eb25f675-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.528182 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9lbxv" podStartSLOduration=3.077305893 podStartE2EDuration="1m14.528161509s" podCreationTimestamp="2026-01-31 05:23:47 +0000 UTC" firstStartedPulling="2026-01-31 05:23:48.612591231 +0000 UTC m=+153.661752827" lastFinishedPulling="2026-01-31 05:25:00.063446847 +0000 UTC m=+225.112608443" observedRunningTime="2026-01-31 05:25:01.224867378 +0000 UTC m=+226.274028984" watchObservedRunningTime="2026-01-31 05:25:01.528161509 +0000 UTC m=+226.577323105" Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.529874 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtlg6"] Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.532216 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xtlg6"] Jan 31 05:25:01 crc kubenswrapper[5050]: I0131 05:25:01.741814 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" path="/var/lib/kubelet/pods/1340a566-94da-430a-abaa-2fa5eb25f675/volumes" Jan 31 05:25:02 crc kubenswrapper[5050]: I0131 05:25:02.215814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmfcw" 
event={"ID":"1bdc621b-09b4-43de-921b-be2322174c79","Type":"ContainerStarted","Data":"43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a"} Jan 31 05:25:02 crc kubenswrapper[5050]: I0131 05:25:02.244036 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qmfcw" podStartSLOduration=3.861570094 podStartE2EDuration="1m16.244021979s" podCreationTimestamp="2026-01-31 05:23:46 +0000 UTC" firstStartedPulling="2026-01-31 05:23:48.606583882 +0000 UTC m=+153.655745478" lastFinishedPulling="2026-01-31 05:25:00.989035777 +0000 UTC m=+226.038197363" observedRunningTime="2026-01-31 05:25:02.243888205 +0000 UTC m=+227.293049841" watchObservedRunningTime="2026-01-31 05:25:02.244021979 +0000 UTC m=+227.293183575" Jan 31 05:25:03 crc kubenswrapper[5050]: I0131 05:25:03.960144 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:25:03 crc kubenswrapper[5050]: I0131 05:25:03.960199 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:25:04 crc kubenswrapper[5050]: I0131 05:25:04.006217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:25:04 crc kubenswrapper[5050]: I0131 05:25:04.168295 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:25:04 crc kubenswrapper[5050]: I0131 05:25:04.168352 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:25:04 crc kubenswrapper[5050]: I0131 05:25:04.220919 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:25:04 crc kubenswrapper[5050]: I0131 05:25:04.269421 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:25:07 crc kubenswrapper[5050]: I0131 05:25:07.154316 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:25:07 crc kubenswrapper[5050]: I0131 05:25:07.154365 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:25:07 crc kubenswrapper[5050]: I0131 05:25:07.604481 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:25:07 crc kubenswrapper[5050]: I0131 05:25:07.604520 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:25:07 crc kubenswrapper[5050]: I0131 05:25:07.666081 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:25:07 crc kubenswrapper[5050]: I0131 05:25:07.980006 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ln492"] Jan 31 05:25:08 crc kubenswrapper[5050]: I0131 05:25:08.200032 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qmfcw" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="registry-server" probeResult="failure" output=< Jan 31 05:25:08 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 05:25:08 crc kubenswrapper[5050]: > Jan 31 05:25:08 crc kubenswrapper[5050]: I0131 05:25:08.286489 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:25:09 crc kubenswrapper[5050]: I0131 05:25:09.071354 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-9lbxv"] Jan 31 05:25:10 crc kubenswrapper[5050]: I0131 05:25:10.266457 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9lbxv" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="registry-server" containerID="cri-o://6574cde87c8635c82d06a74e6d923efdd15da6e9e32aaa7373fd18f2384c24ff" gracePeriod=2 Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.072297 5050 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073149 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073181 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073208 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="extract-content" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073224 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="extract-content" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073245 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a04179-b0fd-4a93-801a-37fb0154d52f" containerName="pruner" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073261 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a04179-b0fd-4a93-801a-37fb0154d52f" containerName="pruner" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073280 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="registry-server" Jan 31 05:25:12 
crc kubenswrapper[5050]: I0131 05:25:12.073295 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073311 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="extract-utilities" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073326 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="extract-utilities" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073357 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="extract-content" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073373 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="extract-content" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073399 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="extract-content" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073413 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="extract-content" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073433 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07223bb6-4730-45b3-8eb3-78cfc4cec433" containerName="pruner" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073449 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="07223bb6-4730-45b3-8eb3-78cfc4cec433" containerName="pruner" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073472 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="extract-utilities" Jan 31 05:25:12 crc kubenswrapper[5050]: 
I0131 05:25:12.073487 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="extract-utilities" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073518 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073531 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.073551 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="extract-utilities" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073566 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="extract-utilities" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073836 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1340a566-94da-430a-abaa-2fa5eb25f675" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073860 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b775892b-5d01-4235-995f-5f38f01122ee" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073886 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0f8d83-483d-499f-9fbc-c11768d3e97e" containerName="registry-server" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073906 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="07223bb6-4730-45b3-8eb3-78cfc4cec433" containerName="pruner" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.073921 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a04179-b0fd-4a93-801a-37fb0154d52f" containerName="pruner" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 
05:25:12.074686 5050 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.074930 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.075264 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace" gracePeriod=15 Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.075414 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c" gracePeriod=15 Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.075388 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb" gracePeriod=15 Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.075478 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa" gracePeriod=15 Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.075596 5050 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc" gracePeriod=15 Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.077183 5050 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.077520 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.077547 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.077578 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.077595 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.077781 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.077805 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.077836 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.077853 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.077877 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.077893 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.078091 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.078201 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 05:25:12 crc kubenswrapper[5050]: E0131 05:25:12.078229 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.078245 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.078488 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.078517 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.078547 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 
05:25:12.078567 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.078582 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.078620 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243338 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243516 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243605 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243626 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243754 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243819 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243850 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.243874 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.291155 5050 generic.go:334] "Generic (PLEG): container finished" podID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" 
containerID="6574cde87c8635c82d06a74e6d923efdd15da6e9e32aaa7373fd18f2384c24ff" exitCode=0 Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.291243 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lbxv" event={"ID":"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72","Type":"ContainerDied","Data":"6574cde87c8635c82d06a74e6d923efdd15da6e9e32aaa7373fd18f2384c24ff"} Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.344661 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.344723 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345153 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345270 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: 
I0131 05:25:12.345318 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345326 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345361 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345372 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345377 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345406 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345421 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345393 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345449 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345428 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345501 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.345590 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.792654 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.793649 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.794198 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.852580 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-catalog-content\") pod \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " Jan 31 05:25:12 crc 
kubenswrapper[5050]: I0131 05:25:12.852659 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-utilities\") pod \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.852756 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zz8s\" (UniqueName: \"kubernetes.io/projected/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-kube-api-access-6zz8s\") pod \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\" (UID: \"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72\") " Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.856185 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-utilities" (OuterVolumeSpecName: "utilities") pod "90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" (UID: "90f89cbe-5e0c-4fdd-ae5f-fdb706620c72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.861648 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-kube-api-access-6zz8s" (OuterVolumeSpecName: "kube-api-access-6zz8s") pod "90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" (UID: "90f89cbe-5e0c-4fdd-ae5f-fdb706620c72"). InnerVolumeSpecName "kube-api-access-6zz8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.955468 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:12 crc kubenswrapper[5050]: I0131 05:25:12.955531 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zz8s\" (UniqueName: \"kubernetes.io/projected/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-kube-api-access-6zz8s\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.299494 5050 generic.go:334] "Generic (PLEG): container finished" podID="547e148c-16ac-498d-a6fc-1ef61b8d9501" containerID="fca549162212da50e01267e87861dc32dcb896d589b08f1628c623ff7c5f01b4" exitCode=0 Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.299613 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"547e148c-16ac-498d-a6fc-1ef61b8d9501","Type":"ContainerDied","Data":"fca549162212da50e01267e87861dc32dcb896d589b08f1628c623ff7c5f01b4"} Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.300532 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.301068 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.301583 
5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.302858 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lbxv" event={"ID":"90f89cbe-5e0c-4fdd-ae5f-fdb706620c72","Type":"ContainerDied","Data":"a6bb376c421f612cdab31659d3582c06b3c71c0f9caaf1848cc70b10f185e0a0"} Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.302931 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lbxv" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.302934 5050 scope.go:117] "RemoveContainer" containerID="6574cde87c8635c82d06a74e6d923efdd15da6e9e32aaa7373fd18f2384c24ff" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.303713 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.304325 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.304844 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.307813 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.315631 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.316687 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c" exitCode=0 Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.316733 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb" exitCode=0 Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.316753 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa" exitCode=0 Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.316772 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc" exitCode=2 Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.328114 5050 scope.go:117] "RemoveContainer" containerID="3b2a3d4f22b16129da859c90e8c48df0bf96ebcae65633250fda7092b5a515c9" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.354993 5050 scope.go:117] "RemoveContainer" 
containerID="d366b0cd089d200a873a61a5e118608705b5cd9640b9e1f639829f227e512755" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.395124 5050 scope.go:117] "RemoveContainer" containerID="6ce6382f565edb593936af55981847e219136da8b3167eeef1845230de05f38e" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.480052 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" (UID: "90f89cbe-5e0c-4fdd-ae5f-fdb706620c72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.564804 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.616006 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.616449 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:13 crc kubenswrapper[5050]: I0131 05:25:13.617229 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.030455 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.031625 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.032190 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.032776 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.324905 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.461839 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" 
Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.463095 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.463818 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.464016 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.464179 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.464346 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.484315 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.484393 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.484420 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.484526 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.484548 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.484635 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.530549 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.531002 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.531185 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.531442 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.531826 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.585792 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-kubelet-dir\") pod \"547e148c-16ac-498d-a6fc-1ef61b8d9501\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.585840 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-var-lock\") pod \"547e148c-16ac-498d-a6fc-1ef61b8d9501\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.585886 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "547e148c-16ac-498d-a6fc-1ef61b8d9501" (UID: "547e148c-16ac-498d-a6fc-1ef61b8d9501"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.585991 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/547e148c-16ac-498d-a6fc-1ef61b8d9501-kube-api-access\") pod \"547e148c-16ac-498d-a6fc-1ef61b8d9501\" (UID: \"547e148c-16ac-498d-a6fc-1ef61b8d9501\") " Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.586033 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-var-lock" (OuterVolumeSpecName: "var-lock") pod "547e148c-16ac-498d-a6fc-1ef61b8d9501" (UID: "547e148c-16ac-498d-a6fc-1ef61b8d9501"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.586214 5050 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.586229 5050 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.586239 5050 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/547e148c-16ac-498d-a6fc-1ef61b8d9501-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.586247 5050 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.586255 5050 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.594301 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547e148c-16ac-498d-a6fc-1ef61b8d9501-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "547e148c-16ac-498d-a6fc-1ef61b8d9501" (UID: "547e148c-16ac-498d-a6fc-1ef61b8d9501"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:25:14 crc kubenswrapper[5050]: I0131 05:25:14.687202 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/547e148c-16ac-498d-a6fc-1ef61b8d9501-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.338171 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.340082 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace" exitCode=0 Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.340282 5050 scope.go:117] "RemoveContainer" containerID="71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.340211 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.359516 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"547e148c-16ac-498d-a6fc-1ef61b8d9501","Type":"ContainerDied","Data":"0a555e22d6fa9db9e3be30e3df36df3511de84c4c49b04ec458cd9ceeca9005e"} Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.359559 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a555e22d6fa9db9e3be30e3df36df3511de84c4c49b04ec458cd9ceeca9005e" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.359655 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.362621 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.363107 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.363507 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.364047 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.365063 5050 scope.go:117] "RemoveContainer" containerID="242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.382757 5050 scope.go:117] "RemoveContainer" containerID="ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa" Jan 31 
05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.392819 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.393589 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.393933 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.394289 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.403480 5050 scope.go:117] "RemoveContainer" containerID="e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.422108 5050 scope.go:117] "RemoveContainer" containerID="c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.445457 5050 scope.go:117] 
"RemoveContainer" containerID="1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.461821 5050 scope.go:117] "RemoveContainer" containerID="71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c" Jan 31 05:25:15 crc kubenswrapper[5050]: E0131 05:25:15.462290 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\": container with ID starting with 71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c not found: ID does not exist" containerID="71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.462379 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c"} err="failed to get container status \"71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\": rpc error: code = NotFound desc = could not find container \"71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c\": container with ID starting with 71612fc811b554b1328630fe0302c0ee342c1b2c315c50c09f27ff494146286c not found: ID does not exist" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.462460 5050 scope.go:117] "RemoveContainer" containerID="242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb" Jan 31 05:25:15 crc kubenswrapper[5050]: E0131 05:25:15.462805 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\": container with ID starting with 242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb not found: ID does not exist" containerID="242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb" Jan 31 05:25:15 crc 
kubenswrapper[5050]: I0131 05:25:15.462931 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb"} err="failed to get container status \"242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\": rpc error: code = NotFound desc = could not find container \"242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb\": container with ID starting with 242e95f27e735371459b4e52b7d81804cd77f6fd7cd3bbc102097f3f6afceddb not found: ID does not exist" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.463021 5050 scope.go:117] "RemoveContainer" containerID="ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa" Jan 31 05:25:15 crc kubenswrapper[5050]: E0131 05:25:15.463263 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\": container with ID starting with ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa not found: ID does not exist" containerID="ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.463345 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa"} err="failed to get container status \"ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\": rpc error: code = NotFound desc = could not find container \"ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa\": container with ID starting with ca0e38d90f4024f98572f4bb2ce3c56bfb831e383e4cc98894e2ef736bcf78aa not found: ID does not exist" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.463495 5050 scope.go:117] "RemoveContainer" containerID="e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc" Jan 31 
05:25:15 crc kubenswrapper[5050]: E0131 05:25:15.463704 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\": container with ID starting with e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc not found: ID does not exist" containerID="e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.463776 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc"} err="failed to get container status \"e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\": rpc error: code = NotFound desc = could not find container \"e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc\": container with ID starting with e30508e8e4e37222df09258e2a05a20bdf37abfbe106981a07212f96b0ae42cc not found: ID does not exist" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.463843 5050 scope.go:117] "RemoveContainer" containerID="c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace" Jan 31 05:25:15 crc kubenswrapper[5050]: E0131 05:25:15.464076 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\": container with ID starting with c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace not found: ID does not exist" containerID="c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.464146 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace"} err="failed to get container status 
\"c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\": rpc error: code = NotFound desc = could not find container \"c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace\": container with ID starting with c32f1682495aeaa276efa860d1fb4f2812f83f3b74316bb3e8473b07d9d15ace not found: ID does not exist" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.464208 5050 scope.go:117] "RemoveContainer" containerID="1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af" Jan 31 05:25:15 crc kubenswrapper[5050]: E0131 05:25:15.464402 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\": container with ID starting with 1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af not found: ID does not exist" containerID="1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.464471 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af"} err="failed to get container status \"1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\": rpc error: code = NotFound desc = could not find container \"1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af\": container with ID starting with 1371771e89538b4c78f515a1e71b8008a970ce897821f6f2e037a9028cc896af not found: ID does not exist" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.745557 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 
05:25:15.746073 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.746650 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.747344 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:15 crc kubenswrapper[5050]: I0131 05:25:15.748023 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:16 crc kubenswrapper[5050]: E0131 05:25:16.580120 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:16 crc kubenswrapper[5050]: E0131 05:25:16.580777 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:16 crc kubenswrapper[5050]: 
E0131 05:25:16.581420 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:16 crc kubenswrapper[5050]: E0131 05:25:16.581857 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:16 crc kubenswrapper[5050]: E0131 05:25:16.582330 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:16 crc kubenswrapper[5050]: I0131 05:25:16.582384 5050 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 05:25:16 crc kubenswrapper[5050]: E0131 05:25:16.582813 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="200ms" Jan 31 05:25:16 crc kubenswrapper[5050]: E0131 05:25:16.783650 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="400ms" Jan 31 05:25:17 crc kubenswrapper[5050]: E0131 05:25:17.130987 5050 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.70:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.131535 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:17 crc kubenswrapper[5050]: W0131 05:25:17.164857 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-41fc99649cb051b318853dd652b14910c0523be3db89de3b6b0b432b7f18ded0 WatchSource:0}: Error finding container 41fc99649cb051b318853dd652b14910c0523be3db89de3b6b0b432b7f18ded0: Status 404 returned error can't find the container with id 41fc99649cb051b318853dd652b14910c0523be3db89de3b6b0b432b7f18ded0 Jan 31 05:25:17 crc kubenswrapper[5050]: E0131 05:25:17.171186 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.70:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb97b273f8c5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 05:25:17.170486366 +0000 UTC m=+242.219647992,LastTimestamp:2026-01-31 05:25:17.170486366 +0000 UTC m=+242.219647992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 05:25:17 crc kubenswrapper[5050]: E0131 05:25:17.184918 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="800ms" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.209519 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.212167 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.212422 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.212669 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.212989 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.275778 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.276255 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.276833 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.277507 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.277828 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:17 crc kubenswrapper[5050]: I0131 05:25:17.372744 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"41fc99649cb051b318853dd652b14910c0523be3db89de3b6b0b432b7f18ded0"} Jan 31 05:25:17 crc kubenswrapper[5050]: E0131 05:25:17.986318 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="1.6s" Jan 31 05:25:18 crc kubenswrapper[5050]: E0131 05:25:18.381408 5050 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:18 crc kubenswrapper[5050]: I0131 05:25:18.381417 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:18 crc kubenswrapper[5050]: I0131 05:25:18.381767 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:18 crc kubenswrapper[5050]: I0131 05:25:18.382091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48"} Jan 31 05:25:18 crc kubenswrapper[5050]: I0131 05:25:18.382144 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:18 crc kubenswrapper[5050]: I0131 05:25:18.382542 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:19 crc kubenswrapper[5050]: E0131 05:25:19.389438 5050 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:25:19 crc kubenswrapper[5050]: E0131 05:25:19.587777 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="3.2s" Jan 31 05:25:20 crc kubenswrapper[5050]: E0131 05:25:20.221702 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.70:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb97b273f8c5e openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 05:25:17.170486366 +0000 UTC m=+242.219647992,LastTimestamp:2026-01-31 05:25:17.170486366 +0000 UTC m=+242.219647992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 05:25:22 crc kubenswrapper[5050]: E0131 05:25:22.788645 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.70:6443: connect: connection refused" interval="6.4s" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.426585 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.426745 5050 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936" exitCode=1 Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.426788 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936"} Jan 31 05:25:25 crc 
kubenswrapper[5050]: I0131 05:25:25.427485 5050 scope.go:117] "RemoveContainer" containerID="8114445f29751a32a566b360249dca7f3b1a736de6788aaad22e76a2113c2936" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.427835 5050 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.428316 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.428656 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.429098 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.429389 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.740562 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.740734 5050 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.741590 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.742105 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.742677 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 
05:25:25.743179 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.744111 5050 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.744871 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.745376 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.745847 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.746421 
5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.761905 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.761946 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:25 crc kubenswrapper[5050]: E0131 05:25:25.762442 5050 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:25 crc kubenswrapper[5050]: I0131 05:25:25.763123 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:25 crc kubenswrapper[5050]: E0131 05:25:25.829471 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.70:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" volumeName="registry-storage" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.437078 5050 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bad1ed4084fcf8ae6a2db53246c4c9db817ad7a70096db357fe8aa4c0e1b7fa8" exitCode=0 Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.437189 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bad1ed4084fcf8ae6a2db53246c4c9db817ad7a70096db357fe8aa4c0e1b7fa8"} Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.437575 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eeabd25e0c131174abab223cb8bd176bbbc39f228bba3dee674672e08e2baa89"} Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.438057 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.438080 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:26 crc kubenswrapper[5050]: E0131 05:25:26.438693 5050 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.438691 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.439387 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.439910 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.440439 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.441252 5050 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.445463 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.445536 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b3f142a8a3371027c8af719b964c2aa1fda97484324c39252c10c2282196393"} Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.446564 5050 status_manager.go:851] "Failed to get status for pod" podUID="1bdc621b-09b4-43de-921b-be2322174c79" pod="openshift-marketplace/redhat-operators-qmfcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmfcw\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.447050 5050 status_manager.go:851] "Failed to get status for pod" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" pod="openshift-marketplace/redhat-operators-9lbxv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9lbxv\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.447448 5050 status_manager.go:851] "Failed to get status for pod" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.70:6443: connect: connection 
refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.447808 5050 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.448274 5050 status_manager.go:851] "Failed to get status for pod" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" pod="openshift-marketplace/community-operators-zdgsp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zdgsp\": dial tcp 38.102.83.70:6443: connect: connection refused" Jan 31 05:25:26 crc kubenswrapper[5050]: I0131 05:25:26.544942 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:25:27 crc kubenswrapper[5050]: I0131 05:25:27.454479 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"931b832b8dc32886d6728537e1a936fcbc7484c23a2a9e42dac940fedebd9401"} Jan 31 05:25:27 crc kubenswrapper[5050]: I0131 05:25:27.454542 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"51feca0251fa56ea2d175d45c8a92ac2c22f176d912c4633a88867db494593cc"} Jan 31 05:25:27 crc kubenswrapper[5050]: I0131 05:25:27.454554 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"74ad114d4a7882b57423abc1c159ac465489ca8c6025d50bf3d583f9ae1483c9"} Jan 31 05:25:28 crc 
kubenswrapper[5050]: I0131 05:25:28.461838 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c32778b7d6d46690c9d85f514433b7b858745e3c74df139810891a3e68b27de3"} Jan 31 05:25:28 crc kubenswrapper[5050]: I0131 05:25:28.462150 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6d18199f4140b0801306431b3d5034ce3c816170c012db766ca6f8db8bbe13a2"} Jan 31 05:25:28 crc kubenswrapper[5050]: I0131 05:25:28.462384 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:28 crc kubenswrapper[5050]: I0131 05:25:28.462397 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:28 crc kubenswrapper[5050]: I0131 05:25:28.462583 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:29 crc kubenswrapper[5050]: I0131 05:25:29.971365 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:25:29 crc kubenswrapper[5050]: I0131 05:25:29.971611 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 05:25:29 crc kubenswrapper[5050]: I0131 05:25:29.971930 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 05:25:30 crc kubenswrapper[5050]: I0131 05:25:30.763257 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:30 crc kubenswrapper[5050]: I0131 05:25:30.763342 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:30 crc kubenswrapper[5050]: I0131 05:25:30.773011 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.012522 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" podUID="f221629d-987d-49fe-bcaf-2708f516eec8" containerName="oauth-openshift" containerID="cri-o://fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da" gracePeriod=15 Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.460418 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.474305 5050 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.501044 5050 generic.go:334] "Generic (PLEG): container finished" podID="f221629d-987d-49fe-bcaf-2708f516eec8" containerID="fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da" exitCode=0 Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.501100 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.501531 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.501547 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.501117 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" event={"ID":"f221629d-987d-49fe-bcaf-2708f516eec8","Type":"ContainerDied","Data":"fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da"} Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.501645 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ln492" event={"ID":"f221629d-987d-49fe-bcaf-2708f516eec8","Type":"ContainerDied","Data":"620fadb6f1b6928c218085b38b55a357a568d302316cfcdb44cb55867adab02e"} Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.501702 5050 scope.go:117] "RemoveContainer" containerID="fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.512395 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.518033 5050 scope.go:117] "RemoveContainer" containerID="fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da" Jan 31 05:25:33 crc kubenswrapper[5050]: E0131 05:25:33.518780 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da\": container with ID starting with 
fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da not found: ID does not exist" containerID="fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.518824 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da"} err="failed to get container status \"fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da\": rpc error: code = NotFound desc = could not find container \"fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da\": container with ID starting with fee46a14325d759fb2f363dee7e8d0930566d5bc5494c308762085e306ea49da not found: ID does not exist" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.630678 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-session\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.630924 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-provider-selection\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631023 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-trusted-ca-bundle\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 
05:25:33.631140 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-error\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631217 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-audit-policies\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631315 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f221629d-987d-49fe-bcaf-2708f516eec8-audit-dir\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631414 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfclt\" (UniqueName: \"kubernetes.io/projected/f221629d-987d-49fe-bcaf-2708f516eec8-kube-api-access-dfclt\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631492 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-router-certs\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631559 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-ocp-branding-template\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631639 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-cliconfig\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631705 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-service-ca\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631791 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-idp-0-file-data\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631860 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-serving-cert\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631926 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-login\") pod \"f221629d-987d-49fe-bcaf-2708f516eec8\" (UID: \"f221629d-987d-49fe-bcaf-2708f516eec8\") " Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631347 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f221629d-987d-49fe-bcaf-2708f516eec8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.631860 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.632130 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.632174 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.632346 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.632365 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.632377 5050 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.632394 5050 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f221629d-987d-49fe-bcaf-2708f516eec8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.632404 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.636559 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.637562 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f221629d-987d-49fe-bcaf-2708f516eec8-kube-api-access-dfclt" (OuterVolumeSpecName: "kube-api-access-dfclt") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "kube-api-access-dfclt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.637684 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.638877 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.641548 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.641609 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.650309 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.650620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.651390 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f221629d-987d-49fe-bcaf-2708f516eec8" (UID: "f221629d-987d-49fe-bcaf-2708f516eec8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733199 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733251 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733270 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733289 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733308 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733327 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733346 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733362 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfclt\" (UniqueName: \"kubernetes.io/projected/f221629d-987d-49fe-bcaf-2708f516eec8-kube-api-access-dfclt\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733377 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: I0131 05:25:33.733393 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f221629d-987d-49fe-bcaf-2708f516eec8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 05:25:33 crc kubenswrapper[5050]: E0131 05:25:33.972029 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 31 05:25:34 crc kubenswrapper[5050]: I0131 05:25:34.510009 5050 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:34 crc kubenswrapper[5050]: I0131 05:25:34.511849 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="81eb4b11-a1e6-48e9-9c95-c03d0642eaad" Jan 31 05:25:35 crc kubenswrapper[5050]: I0131 05:25:35.751522 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5e14c0ee-a5fd-4459-a044-a78f43ff7d3b" Jan 31 05:25:39 crc kubenswrapper[5050]: I0131 05:25:39.978114 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:25:39 crc kubenswrapper[5050]: I0131 05:25:39.987250 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 05:25:43 crc kubenswrapper[5050]: I0131 05:25:43.092574 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 05:25:44 crc kubenswrapper[5050]: I0131 05:25:44.371559 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 05:25:44 crc kubenswrapper[5050]: I0131 05:25:44.518635 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 05:25:44 crc kubenswrapper[5050]: I0131 05:25:44.641328 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 05:25:44 crc kubenswrapper[5050]: I0131 05:25:44.795027 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 05:25:44 crc kubenswrapper[5050]: I0131 
05:25:44.861942 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 05:25:44 crc kubenswrapper[5050]: I0131 05:25:44.865879 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 05:25:45 crc kubenswrapper[5050]: I0131 05:25:45.467139 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 05:25:45 crc kubenswrapper[5050]: I0131 05:25:45.510006 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 05:25:45 crc kubenswrapper[5050]: I0131 05:25:45.595520 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 05:25:45 crc kubenswrapper[5050]: I0131 05:25:45.676157 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 05:25:45 crc kubenswrapper[5050]: I0131 05:25:45.822673 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 05:25:45 crc kubenswrapper[5050]: I0131 05:25:45.855924 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.184472 5050 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.259771 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.326249 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.392673 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.404272 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.476089 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.598199 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.620850 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.640389 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.698035 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.704812 5050 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.770698 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.800225 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 05:25:46 crc kubenswrapper[5050]: I0131 05:25:46.996317 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.032248 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.101397 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.189833 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.230226 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.323303 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.336836 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.348864 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.376108 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.497678 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.626832 5050 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 
05:25:47.661515 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.677645 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.752497 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.777822 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.844148 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.899825 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 05:25:47 crc kubenswrapper[5050]: I0131 05:25:47.974124 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.122433 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.126236 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.133055 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.227860 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.231617 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.258324 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.358369 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.568668 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.590277 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.619570 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.687644 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.782779 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.839042 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.890108 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 05:25:48 crc 
kubenswrapper[5050]: I0131 05:25:48.898526 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.919688 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.965773 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 05:25:48 crc kubenswrapper[5050]: I0131 05:25:48.984363 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.009735 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.076022 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.267153 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.290607 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.305211 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.310562 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.319504 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.324912 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.394808 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.541785 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.570661 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.622019 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.675712 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.697504 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.721898 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.783761 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.834742 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 05:25:49 crc 
kubenswrapper[5050]: I0131 05:25:49.842467 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.907094 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 05:25:49 crc kubenswrapper[5050]: I0131 05:25:49.985645 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.024686 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.035223 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.125187 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.201988 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.226365 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.241536 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.275185 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.296455 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.308440 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.407158 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.413317 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.518008 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.544974 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.546097 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.555649 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.613727 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 05:25:50 crc kubenswrapper[5050]: I0131 05:25:50.939193 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.042858 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 
05:25:51.118365 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.142491 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.146685 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.181396 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.262094 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.345055 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.349668 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.431636 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.545534 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.546309 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.719139 5050 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.810080 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.852210 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 05:25:51 crc kubenswrapper[5050]: I0131 05:25:51.875727 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.054020 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.054940 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.084991 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.178520 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.189542 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.207434 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.248278 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 05:25:52 crc kubenswrapper[5050]: 
I0131 05:25:52.257000 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.266983 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.340175 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.371677 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.418756 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.535905 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.779678 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.814120 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.949241 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.953161 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.965769 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 05:25:52 crc 
kubenswrapper[5050]: I0131 05:25:52.978911 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 05:25:52 crc kubenswrapper[5050]: I0131 05:25:52.994074 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.010914 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.023777 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.042577 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.239918 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.320701 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.341583 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.359449 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.412064 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.455829 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 05:25:53 crc 
kubenswrapper[5050]: I0131 05:25:53.563854 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.578529 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.682008 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.745466 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.808011 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.815781 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.897582 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.921699 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 05:25:53 crc kubenswrapper[5050]: I0131 05:25:53.946107 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.002763 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.007077 5050 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.078362 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.099267 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.108099 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.111694 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.120557 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.129869 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.196368 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.246119 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.258047 5050 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.301332 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.339057 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.401442 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.487225 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.582200 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.582296 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.619285 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.705939 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.724742 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.744642 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.797375 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.811021 5050 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.874477 5050 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.883281 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9lbxv","openshift-authentication/oauth-openshift-558db77b4-ln492","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.883412 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.890277 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.912204 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.912178135 podStartE2EDuration="21.912178135s" podCreationTimestamp="2026-01-31 05:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:25:54.909083899 +0000 UTC m=+279.958245555" watchObservedRunningTime="2026-01-31 05:25:54.912178135 +0000 UTC m=+279.961339771" Jan 31 05:25:54 crc kubenswrapper[5050]: I0131 05:25:54.954444 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.034710 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.055755 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.098307 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.270683 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.285890 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.576414 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.683812 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.731479 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.742736 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.747140 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" path="/var/lib/kubelet/pods/90f89cbe-5e0c-4fdd-ae5f-fdb706620c72/volumes" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.747975 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f221629d-987d-49fe-bcaf-2708f516eec8" path="/var/lib/kubelet/pods/f221629d-987d-49fe-bcaf-2708f516eec8/volumes" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.774398 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.784305 5050 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.841622 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 05:25:55 crc kubenswrapper[5050]: I0131 05:25:55.970666 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.021314 5050 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.021569 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48" gracePeriod=5 Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.118638 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.133825 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.165849 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.183020 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 
05:25:56.184390 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.267494 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.275280 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.306854 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.367232 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.410220 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.446405 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.473466 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.534900 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.601377 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.612097 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.627996 5050 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.658904 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.800275 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.904839 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 05:25:56 crc kubenswrapper[5050]: I0131 05:25:56.987174 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.168883 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.200155 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.305521 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.424901 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.555522 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.590031 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.606504 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.617709 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.693507 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.707468 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.707480 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.791882 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 05:25:57 crc kubenswrapper[5050]: I0131 05:25:57.852018 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.080852 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.106197 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.244491 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.462647 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.532356 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.589684 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.611741 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.677682 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68b95d957b-kkq9n"] Jan 31 05:25:58 crc kubenswrapper[5050]: E0131 05:25:58.678094 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="extract-utilities" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678125 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="extract-utilities" Jan 31 05:25:58 crc kubenswrapper[5050]: E0131 05:25:58.678172 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" containerName="installer" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678191 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" containerName="installer" Jan 31 05:25:58 crc kubenswrapper[5050]: E0131 05:25:58.678221 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="extract-content" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 
05:25:58.678242 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="extract-content" Jan 31 05:25:58 crc kubenswrapper[5050]: E0131 05:25:58.678268 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678285 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 05:25:58 crc kubenswrapper[5050]: E0131 05:25:58.678307 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f221629d-987d-49fe-bcaf-2708f516eec8" containerName="oauth-openshift" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678325 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f221629d-987d-49fe-bcaf-2708f516eec8" containerName="oauth-openshift" Jan 31 05:25:58 crc kubenswrapper[5050]: E0131 05:25:58.678349 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="registry-server" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678367 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="registry-server" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678608 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678650 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="547e148c-16ac-498d-a6fc-1ef61b8d9501" containerName="installer" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678677 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f89cbe-5e0c-4fdd-ae5f-fdb706620c72" containerName="registry-server" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.678703 5050 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f221629d-987d-49fe-bcaf-2708f516eec8" containerName="oauth-openshift" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.697688 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.701696 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.704110 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.704394 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.704421 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.705675 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68b95d957b-kkq9n"] Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.707738 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708578 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-session\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708621 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-error\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708677 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708699 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-login\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708732 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708755 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-audit-dir\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708786 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-audit-policies\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708805 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708825 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: 
\"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-service-ca\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708913 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-router-certs\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708973 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.708999 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj74c\" (UniqueName: \"kubernetes.io/projected/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-kube-api-access-dj74c\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 
05:25:58.710123 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.710886 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.711215 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.711458 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.713002 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.713070 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.713432 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.727284 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.732222 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.737399 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.761870 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810530 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-audit-policies\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810564 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810581 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810610 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-service-ca\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810632 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-router-certs\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810653 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810669 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj74c\" (UniqueName: \"kubernetes.io/projected/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-kube-api-access-dj74c\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810692 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-session\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810709 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " 
pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810729 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-error\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810743 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810759 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-login\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810776 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810790 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-audit-dir\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.810847 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-audit-dir\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.811378 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-audit-policies\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.812537 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.813899 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.814479 
5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-service-ca\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.816887 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.818644 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-session\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.820019 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.820512 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.821319 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-login\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.826326 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.830175 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-template-error\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.832299 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:58 crc kubenswrapper[5050]: I0131 05:25:58.834819 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dj74c\" (UniqueName: \"kubernetes.io/projected/b450e4e4-bd14-48e6-9ecf-0df67fe2f08c-kube-api-access-dj74c\") pod \"oauth-openshift-68b95d957b-kkq9n\" (UID: \"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c\") " pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.027050 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.282686 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.312856 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68b95d957b-kkq9n"] Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.329023 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.675187 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.712630 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" event={"ID":"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c","Type":"ContainerStarted","Data":"4be72b853369c847e56ad6930c77f7aace06f0fcce070727c09f274ef26b37aa"} Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.712680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" event={"ID":"b450e4e4-bd14-48e6-9ecf-0df67fe2f08c","Type":"ContainerStarted","Data":"f4634c953da2427df4b61d022a2dda3508c168e6b30fcbaa77417091574f988e"} Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 
05:25:59.713478 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.749658 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" podStartSLOduration=51.749633262 podStartE2EDuration="51.749633262s" podCreationTimestamp="2026-01-31 05:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:25:59.746862275 +0000 UTC m=+284.796023882" watchObservedRunningTime="2026-01-31 05:25:59.749633262 +0000 UTC m=+284.798794898" Jan 31 05:25:59 crc kubenswrapper[5050]: I0131 05:25:59.753848 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 05:26:00 crc kubenswrapper[5050]: I0131 05:26:00.035998 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 05:26:00 crc kubenswrapper[5050]: I0131 05:26:00.213682 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68b95d957b-kkq9n" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.185377 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.625024 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.625125 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.655457 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.655535 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.655565 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.655666 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.655706 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.655772 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.656071 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.656122 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.656175 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.656128 5050 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.666896 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.726669 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.726732 5050 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48" exitCode=137 Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.726814 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.726852 5050 scope.go:117] "RemoveContainer" containerID="daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.751989 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.758251 5050 scope.go:117] "RemoveContainer" containerID="daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48" Jan 31 05:26:01 crc kubenswrapper[5050]: E0131 05:26:01.758970 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48\": container with ID starting with daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48 not found: ID does not exist" containerID="daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.759017 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48"} err="failed to get container status \"daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48\": rpc error: code = NotFound desc = could not find container \"daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48\": container with ID starting with daab7aa563a518d273b23d3d06a5e965da069fec6544a976285fa65985732a48 not found: ID does not exist" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.760216 5050 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.760267 5050 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.760288 5050 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:01 crc kubenswrapper[5050]: I0131 05:26:01.760309 5050 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:14 crc kubenswrapper[5050]: I0131 05:26:14.804377 5050 generic.go:334] "Generic (PLEG): container finished" podID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerID="c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03" exitCode=0 Jan 31 05:26:14 crc kubenswrapper[5050]: I0131 05:26:14.804511 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" event={"ID":"a8c36ad8-2c55-41d9-8bcc-8accc3501626","Type":"ContainerDied","Data":"c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03"} Jan 31 05:26:14 crc kubenswrapper[5050]: I0131 05:26:14.805793 5050 scope.go:117] "RemoveContainer" containerID="c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03" Jan 31 05:26:15 crc kubenswrapper[5050]: I0131 05:26:15.512365 5050 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 05:26:15 crc kubenswrapper[5050]: I0131 05:26:15.824343 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" 
event={"ID":"a8c36ad8-2c55-41d9-8bcc-8accc3501626","Type":"ContainerStarted","Data":"cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e"} Jan 31 05:26:15 crc kubenswrapper[5050]: I0131 05:26:15.826260 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:26:15 crc kubenswrapper[5050]: I0131 05:26:15.832899 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.668722 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck76z"] Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.669354 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" podUID="d152ed50-3f92-49c8-80cc-e73e4046259e" containerName="controller-manager" containerID="cri-o://f09019a2bf0d8455f6ff986bbc366a72f0cde16690da6330ba6d96369f3d41f7" gracePeriod=30 Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.765514 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"] Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.765768 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" podUID="85a5692d-28e5-45cd-85db-ba1dcef92b58" containerName="route-controller-manager" containerID="cri-o://64bc90f6655715b22af6501a5bba507011f7607eee52abbfff6560aab1c49400" gracePeriod=30 Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.939110 5050 generic.go:334] "Generic (PLEG): container finished" podID="85a5692d-28e5-45cd-85db-ba1dcef92b58" containerID="64bc90f6655715b22af6501a5bba507011f7607eee52abbfff6560aab1c49400" exitCode=0 
Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.939180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" event={"ID":"85a5692d-28e5-45cd-85db-ba1dcef92b58","Type":"ContainerDied","Data":"64bc90f6655715b22af6501a5bba507011f7607eee52abbfff6560aab1c49400"} Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.943247 5050 generic.go:334] "Generic (PLEG): container finished" podID="d152ed50-3f92-49c8-80cc-e73e4046259e" containerID="f09019a2bf0d8455f6ff986bbc366a72f0cde16690da6330ba6d96369f3d41f7" exitCode=0 Jan 31 05:26:30 crc kubenswrapper[5050]: I0131 05:26:30.943319 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" event={"ID":"d152ed50-3f92-49c8-80cc-e73e4046259e","Type":"ContainerDied","Data":"f09019a2bf0d8455f6ff986bbc366a72f0cde16690da6330ba6d96369f3d41f7"} Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.111481 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.177111 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.275271 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t5f6\" (UniqueName: \"kubernetes.io/projected/d152ed50-3f92-49c8-80cc-e73e4046259e-kube-api-access-9t5f6\") pod \"d152ed50-3f92-49c8-80cc-e73e4046259e\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.275535 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d152ed50-3f92-49c8-80cc-e73e4046259e-serving-cert\") pod \"d152ed50-3f92-49c8-80cc-e73e4046259e\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.275632 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a5692d-28e5-45cd-85db-ba1dcef92b58-serving-cert\") pod \"85a5692d-28e5-45cd-85db-ba1dcef92b58\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.275731 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-client-ca\") pod \"d152ed50-3f92-49c8-80cc-e73e4046259e\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.275803 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-config\") pod \"d152ed50-3f92-49c8-80cc-e73e4046259e\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.275905 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-proxy-ca-bundles\") pod \"d152ed50-3f92-49c8-80cc-e73e4046259e\" (UID: \"d152ed50-3f92-49c8-80cc-e73e4046259e\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.276227 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-client-ca\") pod \"85a5692d-28e5-45cd-85db-ba1dcef92b58\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.276367 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tmll\" (UniqueName: \"kubernetes.io/projected/85a5692d-28e5-45cd-85db-ba1dcef92b58-kube-api-access-2tmll\") pod \"85a5692d-28e5-45cd-85db-ba1dcef92b58\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.276482 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-config\") pod \"85a5692d-28e5-45cd-85db-ba1dcef92b58\" (UID: \"85a5692d-28e5-45cd-85db-ba1dcef92b58\") " Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.276598 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d152ed50-3f92-49c8-80cc-e73e4046259e" (UID: "d152ed50-3f92-49c8-80cc-e73e4046259e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.276739 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-config" (OuterVolumeSpecName: "config") pod "d152ed50-3f92-49c8-80cc-e73e4046259e" (UID: "d152ed50-3f92-49c8-80cc-e73e4046259e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.276901 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.276990 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.277094 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-client-ca" (OuterVolumeSpecName: "client-ca") pod "85a5692d-28e5-45cd-85db-ba1dcef92b58" (UID: "85a5692d-28e5-45cd-85db-ba1dcef92b58"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.277448 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d152ed50-3f92-49c8-80cc-e73e4046259e" (UID: "d152ed50-3f92-49c8-80cc-e73e4046259e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.277560 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-config" (OuterVolumeSpecName: "config") pod "85a5692d-28e5-45cd-85db-ba1dcef92b58" (UID: "85a5692d-28e5-45cd-85db-ba1dcef92b58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.281147 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a5692d-28e5-45cd-85db-ba1dcef92b58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85a5692d-28e5-45cd-85db-ba1dcef92b58" (UID: "85a5692d-28e5-45cd-85db-ba1dcef92b58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.281922 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d152ed50-3f92-49c8-80cc-e73e4046259e-kube-api-access-9t5f6" (OuterVolumeSpecName: "kube-api-access-9t5f6") pod "d152ed50-3f92-49c8-80cc-e73e4046259e" (UID: "d152ed50-3f92-49c8-80cc-e73e4046259e"). InnerVolumeSpecName "kube-api-access-9t5f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.282235 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d152ed50-3f92-49c8-80cc-e73e4046259e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d152ed50-3f92-49c8-80cc-e73e4046259e" (UID: "d152ed50-3f92-49c8-80cc-e73e4046259e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.284250 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a5692d-28e5-45cd-85db-ba1dcef92b58-kube-api-access-2tmll" (OuterVolumeSpecName: "kube-api-access-2tmll") pod "85a5692d-28e5-45cd-85db-ba1dcef92b58" (UID: "85a5692d-28e5-45cd-85db-ba1dcef92b58"). InnerVolumeSpecName "kube-api-access-2tmll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.377380 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tmll\" (UniqueName: \"kubernetes.io/projected/85a5692d-28e5-45cd-85db-ba1dcef92b58-kube-api-access-2tmll\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.377418 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.377432 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t5f6\" (UniqueName: \"kubernetes.io/projected/d152ed50-3f92-49c8-80cc-e73e4046259e-kube-api-access-9t5f6\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.377445 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d152ed50-3f92-49c8-80cc-e73e4046259e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.377457 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a5692d-28e5-45cd-85db-ba1dcef92b58-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.377469 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/d152ed50-3f92-49c8-80cc-e73e4046259e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.377480 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a5692d-28e5-45cd-85db-ba1dcef92b58-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.950269 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.950248 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj" event={"ID":"85a5692d-28e5-45cd-85db-ba1dcef92b58","Type":"ContainerDied","Data":"6816dad11c2c9fee4a803df74019368289cc823a2f199ee2ccc424dce2bd0606"} Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.950466 5050 scope.go:117] "RemoveContainer" containerID="64bc90f6655715b22af6501a5bba507011f7607eee52abbfff6560aab1c49400" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.951919 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" event={"ID":"d152ed50-3f92-49c8-80cc-e73e4046259e","Type":"ContainerDied","Data":"1b6879e61c747b22a0388cdbaba0599315fbec08aa873d3025d8e1f844d00098"} Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.952006 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ck76z" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.977363 5050 scope.go:117] "RemoveContainer" containerID="f09019a2bf0d8455f6ff986bbc366a72f0cde16690da6330ba6d96369f3d41f7" Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.983643 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"] Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.990266 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vszrj"] Jan 31 05:26:31 crc kubenswrapper[5050]: I0131 05:26:31.996482 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck76z"] Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.001395 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ck76z"] Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.180352 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t"] Jan 31 05:26:32 crc kubenswrapper[5050]: E0131 05:26:32.180670 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a5692d-28e5-45cd-85db-ba1dcef92b58" containerName="route-controller-manager" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.180685 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a5692d-28e5-45cd-85db-ba1dcef92b58" containerName="route-controller-manager" Jan 31 05:26:32 crc kubenswrapper[5050]: E0131 05:26:32.180703 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d152ed50-3f92-49c8-80cc-e73e4046259e" containerName="controller-manager" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.180712 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d152ed50-3f92-49c8-80cc-e73e4046259e" containerName="controller-manager" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.180829 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a5692d-28e5-45cd-85db-ba1dcef92b58" containerName="route-controller-manager" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.180848 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d152ed50-3f92-49c8-80cc-e73e4046259e" containerName="controller-manager" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.181322 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.182897 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.183766 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.184174 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.184385 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.185372 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.186457 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7974fc8c86-sfx8c"] Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.187065 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.187274 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.188606 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.188689 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.190113 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.190486 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.190507 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191115 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191329 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-client-ca\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191387 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-proxy-ca-bundles\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191434 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-client-ca\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191464 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-config\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191486 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdtfs\" (UniqueName: \"kubernetes.io/projected/02ebbe65-7251-4d78-987e-c87d272e2c39-kube-api-access-zdtfs\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191512 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w77kz\" (UniqueName: \"kubernetes.io/projected/47637a29-4c9c-4125-8c82-0047356f3a29-kube-api-access-w77kz\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: 
\"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191536 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-config\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191590 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ebbe65-7251-4d78-987e-c87d272e2c39-serving-cert\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.191630 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47637a29-4c9c-4125-8c82-0047356f3a29-serving-cert\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.199190 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.204798 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t"] Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.219260 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7974fc8c86-sfx8c"] Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.293741 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-client-ca\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.293843 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-proxy-ca-bundles\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.293892 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-client-ca\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.293936 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-config\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.293998 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdtfs\" (UniqueName: 
\"kubernetes.io/projected/02ebbe65-7251-4d78-987e-c87d272e2c39-kube-api-access-zdtfs\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.294032 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w77kz\" (UniqueName: \"kubernetes.io/projected/47637a29-4c9c-4125-8c82-0047356f3a29-kube-api-access-w77kz\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.294105 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-config\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.294142 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ebbe65-7251-4d78-987e-c87d272e2c39-serving-cert\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.294173 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47637a29-4c9c-4125-8c82-0047356f3a29-serving-cert\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc 
kubenswrapper[5050]: I0131 05:26:32.298073 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-client-ca\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.298907 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-config\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.299867 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-proxy-ca-bundles\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.300160 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-config\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.302137 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-client-ca\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " 
pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.305534 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ebbe65-7251-4d78-987e-c87d272e2c39-serving-cert\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.307427 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47637a29-4c9c-4125-8c82-0047356f3a29-serving-cert\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.326560 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdtfs\" (UniqueName: \"kubernetes.io/projected/02ebbe65-7251-4d78-987e-c87d272e2c39-kube-api-access-zdtfs\") pod \"route-controller-manager-646c6488b5-5lp7t\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.330920 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w77kz\" (UniqueName: \"kubernetes.io/projected/47637a29-4c9c-4125-8c82-0047356f3a29-kube-api-access-w77kz\") pod \"controller-manager-7974fc8c86-sfx8c\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.510678 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.523528 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.792501 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7974fc8c86-sfx8c"] Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.836607 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t"] Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.968850 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" event={"ID":"02ebbe65-7251-4d78-987e-c87d272e2c39","Type":"ContainerStarted","Data":"d8cf1e92637211eca1a08cb03c7a5e12d2aef66c6726fa061922d3c43be8f5e6"} Jan 31 05:26:32 crc kubenswrapper[5050]: I0131 05:26:32.972302 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" event={"ID":"47637a29-4c9c-4125-8c82-0047356f3a29","Type":"ContainerStarted","Data":"ec343ed1a502ac2f6b86638399cb865c8aed93a3fbce5f038bb4ed52890206f5"} Jan 31 05:26:33 crc kubenswrapper[5050]: I0131 05:26:33.746297 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a5692d-28e5-45cd-85db-ba1dcef92b58" path="/var/lib/kubelet/pods/85a5692d-28e5-45cd-85db-ba1dcef92b58/volumes" Jan 31 05:26:33 crc kubenswrapper[5050]: I0131 05:26:33.747996 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d152ed50-3f92-49c8-80cc-e73e4046259e" path="/var/lib/kubelet/pods/d152ed50-3f92-49c8-80cc-e73e4046259e/volumes" Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.011394 5050 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" event={"ID":"02ebbe65-7251-4d78-987e-c87d272e2c39","Type":"ContainerStarted","Data":"5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978"} Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.013721 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.016582 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" event={"ID":"47637a29-4c9c-4125-8c82-0047356f3a29","Type":"ContainerStarted","Data":"d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c"} Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.016996 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.025159 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.029799 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.044687 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" podStartSLOduration=4.044668405 podStartE2EDuration="4.044668405s" podCreationTimestamp="2026-01-31 05:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:26:34.042159866 +0000 UTC m=+319.091321472" 
watchObservedRunningTime="2026-01-31 05:26:34.044668405 +0000 UTC m=+319.093830011" Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.066007 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" podStartSLOduration=4.065981222 podStartE2EDuration="4.065981222s" podCreationTimestamp="2026-01-31 05:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:26:34.064412533 +0000 UTC m=+319.113574139" watchObservedRunningTime="2026-01-31 05:26:34.065981222 +0000 UTC m=+319.115142828" Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.124145 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7974fc8c86-sfx8c"] Jan 31 05:26:34 crc kubenswrapper[5050]: I0131 05:26:34.147177 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t"] Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.030272 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" podUID="47637a29-4c9c-4125-8c82-0047356f3a29" containerName="controller-manager" containerID="cri-o://d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c" gracePeriod=30 Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.030529 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" podUID="02ebbe65-7251-4d78-987e-c87d272e2c39" containerName="route-controller-manager" containerID="cri-o://5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978" gracePeriod=30 Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.517242 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.524851 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.563663 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn"] Jan 31 05:26:36 crc kubenswrapper[5050]: E0131 05:26:36.565131 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47637a29-4c9c-4125-8c82-0047356f3a29" containerName="controller-manager" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.565171 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="47637a29-4c9c-4125-8c82-0047356f3a29" containerName="controller-manager" Jan 31 05:26:36 crc kubenswrapper[5050]: E0131 05:26:36.565193 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ebbe65-7251-4d78-987e-c87d272e2c39" containerName="route-controller-manager" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.565209 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ebbe65-7251-4d78-987e-c87d272e2c39" containerName="route-controller-manager" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.565407 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ebbe65-7251-4d78-987e-c87d272e2c39" containerName="route-controller-manager" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.565433 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="47637a29-4c9c-4125-8c82-0047356f3a29" containerName="controller-manager" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.566071 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.588462 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn"] Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.657541 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdtfs\" (UniqueName: \"kubernetes.io/projected/02ebbe65-7251-4d78-987e-c87d272e2c39-kube-api-access-zdtfs\") pod \"02ebbe65-7251-4d78-987e-c87d272e2c39\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.657642 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-proxy-ca-bundles\") pod \"47637a29-4c9c-4125-8c82-0047356f3a29\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.657685 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-config\") pod \"02ebbe65-7251-4d78-987e-c87d272e2c39\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.657734 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-config\") pod \"47637a29-4c9c-4125-8c82-0047356f3a29\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.657781 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-client-ca\") pod 
\"02ebbe65-7251-4d78-987e-c87d272e2c39\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.657851 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w77kz\" (UniqueName: \"kubernetes.io/projected/47637a29-4c9c-4125-8c82-0047356f3a29-kube-api-access-w77kz\") pod \"47637a29-4c9c-4125-8c82-0047356f3a29\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.657923 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47637a29-4c9c-4125-8c82-0047356f3a29-serving-cert\") pod \"47637a29-4c9c-4125-8c82-0047356f3a29\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.658011 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ebbe65-7251-4d78-987e-c87d272e2c39-serving-cert\") pod \"02ebbe65-7251-4d78-987e-c87d272e2c39\" (UID: \"02ebbe65-7251-4d78-987e-c87d272e2c39\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.658085 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-client-ca\") pod \"47637a29-4c9c-4125-8c82-0047356f3a29\" (UID: \"47637a29-4c9c-4125-8c82-0047356f3a29\") " Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.659005 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-client-ca" (OuterVolumeSpecName: "client-ca") pod "47637a29-4c9c-4125-8c82-0047356f3a29" (UID: "47637a29-4c9c-4125-8c82-0047356f3a29"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.659549 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "47637a29-4c9c-4125-8c82-0047356f3a29" (UID: "47637a29-4c9c-4125-8c82-0047356f3a29"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.659588 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-client-ca" (OuterVolumeSpecName: "client-ca") pod "02ebbe65-7251-4d78-987e-c87d272e2c39" (UID: "02ebbe65-7251-4d78-987e-c87d272e2c39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.660439 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-config" (OuterVolumeSpecName: "config") pod "02ebbe65-7251-4d78-987e-c87d272e2c39" (UID: "02ebbe65-7251-4d78-987e-c87d272e2c39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.660775 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-config" (OuterVolumeSpecName: "config") pod "47637a29-4c9c-4125-8c82-0047356f3a29" (UID: "47637a29-4c9c-4125-8c82-0047356f3a29"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.664807 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47637a29-4c9c-4125-8c82-0047356f3a29-kube-api-access-w77kz" (OuterVolumeSpecName: "kube-api-access-w77kz") pod "47637a29-4c9c-4125-8c82-0047356f3a29" (UID: "47637a29-4c9c-4125-8c82-0047356f3a29"). InnerVolumeSpecName "kube-api-access-w77kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.664903 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ebbe65-7251-4d78-987e-c87d272e2c39-kube-api-access-zdtfs" (OuterVolumeSpecName: "kube-api-access-zdtfs") pod "02ebbe65-7251-4d78-987e-c87d272e2c39" (UID: "02ebbe65-7251-4d78-987e-c87d272e2c39"). InnerVolumeSpecName "kube-api-access-zdtfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.665127 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47637a29-4c9c-4125-8c82-0047356f3a29-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47637a29-4c9c-4125-8c82-0047356f3a29" (UID: "47637a29-4c9c-4125-8c82-0047356f3a29"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.668432 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ebbe65-7251-4d78-987e-c87d272e2c39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02ebbe65-7251-4d78-987e-c87d272e2c39" (UID: "02ebbe65-7251-4d78-987e-c87d272e2c39"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.759758 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-config\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.759828 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-client-ca\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.759898 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-serving-cert\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760190 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89m85\" (UniqueName: \"kubernetes.io/projected/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-kube-api-access-89m85\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760443 5050 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/47637a29-4c9c-4125-8c82-0047356f3a29-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760479 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ebbe65-7251-4d78-987e-c87d272e2c39-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760501 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760526 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdtfs\" (UniqueName: \"kubernetes.io/projected/02ebbe65-7251-4d78-987e-c87d272e2c39-kube-api-access-zdtfs\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760545 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760564 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760581 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47637a29-4c9c-4125-8c82-0047356f3a29-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.760600 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02ebbe65-7251-4d78-987e-c87d272e2c39-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc 
kubenswrapper[5050]: I0131 05:26:36.760619 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w77kz\" (UniqueName: \"kubernetes.io/projected/47637a29-4c9c-4125-8c82-0047356f3a29-kube-api-access-w77kz\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.861356 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-serving-cert\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.861529 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89m85\" (UniqueName: \"kubernetes.io/projected/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-kube-api-access-89m85\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.861582 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-config\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.861606 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-client-ca\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.862761 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-client-ca\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.864542 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-config\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.868853 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-serving-cert\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:36 crc kubenswrapper[5050]: I0131 05:26:36.891145 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89m85\" (UniqueName: \"kubernetes.io/projected/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-kube-api-access-89m85\") pod \"route-controller-manager-5c49c7d9b9-cznkn\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.039648 5050 generic.go:334] "Generic (PLEG): container finished" podID="47637a29-4c9c-4125-8c82-0047356f3a29" 
containerID="d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c" exitCode=0 Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.039740 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" event={"ID":"47637a29-4c9c-4125-8c82-0047356f3a29","Type":"ContainerDied","Data":"d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c"} Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.039845 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" event={"ID":"47637a29-4c9c-4125-8c82-0047356f3a29","Type":"ContainerDied","Data":"ec343ed1a502ac2f6b86638399cb865c8aed93a3fbce5f038bb4ed52890206f5"} Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.039784 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7974fc8c86-sfx8c" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.039882 5050 scope.go:117] "RemoveContainer" containerID="d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.049772 5050 generic.go:334] "Generic (PLEG): container finished" podID="02ebbe65-7251-4d78-987e-c87d272e2c39" containerID="5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978" exitCode=0 Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.049835 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" event={"ID":"02ebbe65-7251-4d78-987e-c87d272e2c39","Type":"ContainerDied","Data":"5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978"} Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.049874 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" 
event={"ID":"02ebbe65-7251-4d78-987e-c87d272e2c39","Type":"ContainerDied","Data":"d8cf1e92637211eca1a08cb03c7a5e12d2aef66c6726fa061922d3c43be8f5e6"} Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.050013 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.071148 5050 scope.go:117] "RemoveContainer" containerID="d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c" Jan 31 05:26:37 crc kubenswrapper[5050]: E0131 05:26:37.071893 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c\": container with ID starting with d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c not found: ID does not exist" containerID="d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.071946 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c"} err="failed to get container status \"d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c\": rpc error: code = NotFound desc = could not find container \"d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c\": container with ID starting with d335aa8c5319fee73edfe1b9c4e81634e479daf59349daa61c269dc85c942f3c not found: ID does not exist" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.072001 5050 scope.go:117] "RemoveContainer" containerID="5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.103403 5050 scope.go:117] "RemoveContainer" containerID="5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978" Jan 31 05:26:37 crc 
kubenswrapper[5050]: E0131 05:26:37.106360 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978\": container with ID starting with 5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978 not found: ID does not exist" containerID="5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.106426 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978"} err="failed to get container status \"5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978\": rpc error: code = NotFound desc = could not find container \"5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978\": container with ID starting with 5fb737c36385f74bd1a28c47178a53b75065b307b60a6a564352eae4a564f978 not found: ID does not exist" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.115563 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7974fc8c86-sfx8c"] Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.124828 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7974fc8c86-sfx8c"] Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.128990 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t"] Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.132899 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6488b5-5lp7t"] Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.187994 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.450920 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn"] Jan 31 05:26:37 crc kubenswrapper[5050]: W0131 05:26:37.455268 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ce0d54c_e6bb_40f9_a4bf_e058bed47571.slice/crio-0dc527c2dda199f77a76f01487e0343ca804dbf683a2e236883e12b8611bdb3b WatchSource:0}: Error finding container 0dc527c2dda199f77a76f01487e0343ca804dbf683a2e236883e12b8611bdb3b: Status 404 returned error can't find the container with id 0dc527c2dda199f77a76f01487e0343ca804dbf683a2e236883e12b8611bdb3b Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.747548 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ebbe65-7251-4d78-987e-c87d272e2c39" path="/var/lib/kubelet/pods/02ebbe65-7251-4d78-987e-c87d272e2c39/volumes" Jan 31 05:26:37 crc kubenswrapper[5050]: I0131 05:26:37.748781 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47637a29-4c9c-4125-8c82-0047356f3a29" path="/var/lib/kubelet/pods/47637a29-4c9c-4125-8c82-0047356f3a29/volumes" Jan 31 05:26:38 crc kubenswrapper[5050]: I0131 05:26:38.060523 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" event={"ID":"7ce0d54c-e6bb-40f9-a4bf-e058bed47571","Type":"ContainerStarted","Data":"1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8"} Jan 31 05:26:38 crc kubenswrapper[5050]: I0131 05:26:38.061198 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:38 crc kubenswrapper[5050]: I0131 05:26:38.061231 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" event={"ID":"7ce0d54c-e6bb-40f9-a4bf-e058bed47571","Type":"ContainerStarted","Data":"0dc527c2dda199f77a76f01487e0343ca804dbf683a2e236883e12b8611bdb3b"} Jan 31 05:26:38 crc kubenswrapper[5050]: I0131 05:26:38.076264 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:38 crc kubenswrapper[5050]: I0131 05:26:38.078602 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" podStartSLOduration=4.078586947 podStartE2EDuration="4.078586947s" podCreationTimestamp="2026-01-31 05:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:26:38.076409749 +0000 UTC m=+323.125571395" watchObservedRunningTime="2026-01-31 05:26:38.078586947 +0000 UTC m=+323.127748533" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.183976 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8"] Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.184701 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.186560 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.188485 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.188939 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.189218 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.189354 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.190519 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.198372 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.203480 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8"] Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.295601 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-client-ca\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " 
pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.295653 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-proxy-ca-bundles\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.295708 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e9569fc-3811-4dd4-9433-23c26eeec997-serving-cert\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.295765 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-config\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.295783 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxs2b\" (UniqueName: \"kubernetes.io/projected/3e9569fc-3811-4dd4-9433-23c26eeec997-kube-api-access-lxs2b\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.398779 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxs2b\" 
(UniqueName: \"kubernetes.io/projected/3e9569fc-3811-4dd4-9433-23c26eeec997-kube-api-access-lxs2b\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.398849 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-config\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.398977 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-client-ca\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.399017 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-proxy-ca-bundles\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.399051 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e9569fc-3811-4dd4-9433-23c26eeec997-serving-cert\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.400548 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-client-ca\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.400757 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-proxy-ca-bundles\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.401169 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-config\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.407827 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e9569fc-3811-4dd4-9433-23c26eeec997-serving-cert\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.419252 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxs2b\" (UniqueName: \"kubernetes.io/projected/3e9569fc-3811-4dd4-9433-23c26eeec997-kube-api-access-lxs2b\") pod \"controller-manager-7f6f6b4787-pjpb8\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 
05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.537240 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:39 crc kubenswrapper[5050]: I0131 05:26:39.753365 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8"] Jan 31 05:26:40 crc kubenswrapper[5050]: I0131 05:26:40.074148 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" event={"ID":"3e9569fc-3811-4dd4-9433-23c26eeec997","Type":"ContainerStarted","Data":"7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4"} Jan 31 05:26:40 crc kubenswrapper[5050]: I0131 05:26:40.074487 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" event={"ID":"3e9569fc-3811-4dd4-9433-23c26eeec997","Type":"ContainerStarted","Data":"45ee53b8a7f08e02dd229186a6613b008ea2da4bd678161c228fd8917b039bcb"} Jan 31 05:26:40 crc kubenswrapper[5050]: I0131 05:26:40.074521 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:40 crc kubenswrapper[5050]: I0131 05:26:40.085404 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:40 crc kubenswrapper[5050]: I0131 05:26:40.121897 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" podStartSLOduration=6.121874993 podStartE2EDuration="6.121874993s" podCreationTimestamp="2026-01-31 05:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:26:40.097406347 +0000 UTC m=+325.146567953" 
watchObservedRunningTime="2026-01-31 05:26:40.121874993 +0000 UTC m=+325.171036589" Jan 31 05:26:40 crc kubenswrapper[5050]: I0131 05:26:40.567875 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8"] Jan 31 05:26:40 crc kubenswrapper[5050]: I0131 05:26:40.592698 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn"] Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.078926 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" podUID="7ce0d54c-e6bb-40f9-a4bf-e058bed47571" containerName="route-controller-manager" containerID="cri-o://1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8" gracePeriod=30 Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.488301 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.641433 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-serving-cert\") pod \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.641517 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-client-ca\") pod \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.641666 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89m85\" (UniqueName: \"kubernetes.io/projected/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-kube-api-access-89m85\") pod \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.641699 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-config\") pod \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\" (UID: \"7ce0d54c-e6bb-40f9-a4bf-e058bed47571\") " Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.642082 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-client-ca" (OuterVolumeSpecName: "client-ca") pod "7ce0d54c-e6bb-40f9-a4bf-e058bed47571" (UID: "7ce0d54c-e6bb-40f9-a4bf-e058bed47571"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.642374 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-config" (OuterVolumeSpecName: "config") pod "7ce0d54c-e6bb-40f9-a4bf-e058bed47571" (UID: "7ce0d54c-e6bb-40f9-a4bf-e058bed47571"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.646619 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7ce0d54c-e6bb-40f9-a4bf-e058bed47571" (UID: "7ce0d54c-e6bb-40f9-a4bf-e058bed47571"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.647824 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-kube-api-access-89m85" (OuterVolumeSpecName: "kube-api-access-89m85") pod "7ce0d54c-e6bb-40f9-a4bf-e058bed47571" (UID: "7ce0d54c-e6bb-40f9-a4bf-e058bed47571"). InnerVolumeSpecName "kube-api-access-89m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.743252 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.743299 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.743320 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89m85\" (UniqueName: \"kubernetes.io/projected/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-kube-api-access-89m85\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:41 crc kubenswrapper[5050]: I0131 05:26:41.743345 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce0d54c-e6bb-40f9-a4bf-e058bed47571-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.087380 5050 generic.go:334] "Generic (PLEG): container finished" podID="7ce0d54c-e6bb-40f9-a4bf-e058bed47571" containerID="1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8" exitCode=0 Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.087431 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" event={"ID":"7ce0d54c-e6bb-40f9-a4bf-e058bed47571","Type":"ContainerDied","Data":"1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8"} Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.087516 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" 
event={"ID":"7ce0d54c-e6bb-40f9-a4bf-e058bed47571","Type":"ContainerDied","Data":"0dc527c2dda199f77a76f01487e0343ca804dbf683a2e236883e12b8611bdb3b"} Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.087543 5050 scope.go:117] "RemoveContainer" containerID="1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.087461 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.087571 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" podUID="3e9569fc-3811-4dd4-9433-23c26eeec997" containerName="controller-manager" containerID="cri-o://7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4" gracePeriod=30 Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.110264 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn"] Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.111276 5050 scope.go:117] "RemoveContainer" containerID="1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8" Jan 31 05:26:42 crc kubenswrapper[5050]: E0131 05:26:42.111799 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8\": container with ID starting with 1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8 not found: ID does not exist" containerID="1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.111845 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8"} err="failed to get container status \"1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8\": rpc error: code = NotFound desc = could not find container \"1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8\": container with ID starting with 1fd0cad5071b7967fc4a69e0180941c7965f644a532adc94f1468520d2dd91d8 not found: ID does not exist" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.113212 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-cznkn"] Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.184353 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn"] Jan 31 05:26:42 crc kubenswrapper[5050]: E0131 05:26:42.184601 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce0d54c-e6bb-40f9-a4bf-e058bed47571" containerName="route-controller-manager" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.184619 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce0d54c-e6bb-40f9-a4bf-e058bed47571" containerName="route-controller-manager" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.184750 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce0d54c-e6bb-40f9-a4bf-e058bed47571" containerName="route-controller-manager" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.185230 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.187223 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.187322 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.187620 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.187713 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.187943 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.188031 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.211044 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn"] Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.350298 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-config\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.350447 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bce78f-8a53-4291-83c9-2d92bfe138bf-serving-cert\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.350624 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-client-ca\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.350658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8nqf\" (UniqueName: \"kubernetes.io/projected/32bce78f-8a53-4291-83c9-2d92bfe138bf-kube-api-access-f8nqf\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.454852 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bce78f-8a53-4291-83c9-2d92bfe138bf-serving-cert\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.454944 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-client-ca\") pod 
\"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.454994 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8nqf\" (UniqueName: \"kubernetes.io/projected/32bce78f-8a53-4291-83c9-2d92bfe138bf-kube-api-access-f8nqf\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.455023 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-config\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.456386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-config\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.456411 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-client-ca\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.461590 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bce78f-8a53-4291-83c9-2d92bfe138bf-serving-cert\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.480334 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8nqf\" (UniqueName: \"kubernetes.io/projected/32bce78f-8a53-4291-83c9-2d92bfe138bf-kube-api-access-f8nqf\") pod \"route-controller-manager-b95bb48c6-9fljn\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.513917 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.533683 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.657416 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e9569fc-3811-4dd4-9433-23c26eeec997-serving-cert\") pod \"3e9569fc-3811-4dd4-9433-23c26eeec997\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.657475 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-client-ca\") pod \"3e9569fc-3811-4dd4-9433-23c26eeec997\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.657517 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-config\") pod \"3e9569fc-3811-4dd4-9433-23c26eeec997\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.657607 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-proxy-ca-bundles\") pod \"3e9569fc-3811-4dd4-9433-23c26eeec997\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.657673 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxs2b\" (UniqueName: \"kubernetes.io/projected/3e9569fc-3811-4dd4-9433-23c26eeec997-kube-api-access-lxs2b\") pod \"3e9569fc-3811-4dd4-9433-23c26eeec997\" (UID: \"3e9569fc-3811-4dd4-9433-23c26eeec997\") " Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.658445 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e9569fc-3811-4dd4-9433-23c26eeec997" (UID: "3e9569fc-3811-4dd4-9433-23c26eeec997"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.658813 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-config" (OuterVolumeSpecName: "config") pod "3e9569fc-3811-4dd4-9433-23c26eeec997" (UID: "3e9569fc-3811-4dd4-9433-23c26eeec997"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.659227 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3e9569fc-3811-4dd4-9433-23c26eeec997" (UID: "3e9569fc-3811-4dd4-9433-23c26eeec997"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.671299 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9569fc-3811-4dd4-9433-23c26eeec997-kube-api-access-lxs2b" (OuterVolumeSpecName: "kube-api-access-lxs2b") pod "3e9569fc-3811-4dd4-9433-23c26eeec997" (UID: "3e9569fc-3811-4dd4-9433-23c26eeec997"). InnerVolumeSpecName "kube-api-access-lxs2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.671734 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9569fc-3811-4dd4-9433-23c26eeec997-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e9569fc-3811-4dd4-9433-23c26eeec997" (UID: "3e9569fc-3811-4dd4-9433-23c26eeec997"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.751699 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn"] Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.758509 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.758538 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.758552 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e9569fc-3811-4dd4-9433-23c26eeec997-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.758566 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxs2b\" (UniqueName: \"kubernetes.io/projected/3e9569fc-3811-4dd4-9433-23c26eeec997-kube-api-access-lxs2b\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:42 crc kubenswrapper[5050]: I0131 05:26:42.758579 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e9569fc-3811-4dd4-9433-23c26eeec997-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.094927 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" event={"ID":"32bce78f-8a53-4291-83c9-2d92bfe138bf","Type":"ContainerStarted","Data":"c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139"} Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.095335 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" event={"ID":"32bce78f-8a53-4291-83c9-2d92bfe138bf","Type":"ContainerStarted","Data":"08b5774f366ddd4b6c5a68b372cd0c11bf4810b513be168ee704c685837d550d"} Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.095360 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.096333 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e9569fc-3811-4dd4-9433-23c26eeec997" containerID="7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4" exitCode=0 Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.096414 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" event={"ID":"3e9569fc-3811-4dd4-9433-23c26eeec997","Type":"ContainerDied","Data":"7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4"} Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.096447 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" event={"ID":"3e9569fc-3811-4dd4-9433-23c26eeec997","Type":"ContainerDied","Data":"45ee53b8a7f08e02dd229186a6613b008ea2da4bd678161c228fd8917b039bcb"} Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.096468 5050 scope.go:117] "RemoveContainer" containerID="7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.096592 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.120352 5050 scope.go:117] "RemoveContainer" containerID="7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4" Jan 31 05:26:43 crc kubenswrapper[5050]: E0131 05:26:43.123248 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4\": container with ID starting with 7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4 not found: ID does not exist" containerID="7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.123307 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4"} err="failed to get container status \"7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4\": rpc error: code = NotFound desc = could not find container \"7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4\": container with ID starting with 7ebfa547cb900f6823653eeb9cb0880949332d749ed25395a28051e6d2fad1a4 not found: ID does not exist" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.140491 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" podStartSLOduration=3.140467156 podStartE2EDuration="3.140467156s" podCreationTimestamp="2026-01-31 05:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:26:43.123644852 +0000 UTC m=+328.172806488" watchObservedRunningTime="2026-01-31 05:26:43.140467156 +0000 UTC m=+328.189628792" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 
05:26:43.141641 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8"] Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.147683 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-pjpb8"] Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.601672 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.743257 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9569fc-3811-4dd4-9433-23c26eeec997" path="/var/lib/kubelet/pods/3e9569fc-3811-4dd4-9433-23c26eeec997/volumes" Jan 31 05:26:43 crc kubenswrapper[5050]: I0131 05:26:43.743760 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce0d54c-e6bb-40f9-a4bf-e058bed47571" path="/var/lib/kubelet/pods/7ce0d54c-e6bb-40f9-a4bf-e058bed47571/volumes" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.201613 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f48b577bc-s5wnh"] Jan 31 05:26:44 crc kubenswrapper[5050]: E0131 05:26:44.202030 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9569fc-3811-4dd4-9433-23c26eeec997" containerName="controller-manager" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.202063 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9569fc-3811-4dd4-9433-23c26eeec997" containerName="controller-manager" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.202250 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9569fc-3811-4dd4-9433-23c26eeec997" containerName="controller-manager" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.202876 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.206791 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f48b577bc-s5wnh"] Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.208396 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.208758 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.209045 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.209919 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.210338 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.211244 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.218695 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.381396 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-config\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " 
pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.382032 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-client-ca\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.382124 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbl5\" (UniqueName: \"kubernetes.io/projected/e91bd0c0-8f30-4b54-99be-6405c1937651-kube-api-access-dkbl5\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.382227 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-proxy-ca-bundles\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.382293 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91bd0c0-8f30-4b54-99be-6405c1937651-serving-cert\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.483575 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-client-ca\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.483662 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbl5\" (UniqueName: \"kubernetes.io/projected/e91bd0c0-8f30-4b54-99be-6405c1937651-kube-api-access-dkbl5\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.483713 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-proxy-ca-bundles\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.483754 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91bd0c0-8f30-4b54-99be-6405c1937651-serving-cert\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.483838 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-config\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.484799 
5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-client-ca\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.485828 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-proxy-ca-bundles\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.486202 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-config\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.494998 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91bd0c0-8f30-4b54-99be-6405c1937651-serving-cert\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.514480 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbl5\" (UniqueName: \"kubernetes.io/projected/e91bd0c0-8f30-4b54-99be-6405c1937651-kube-api-access-dkbl5\") pod \"controller-manager-7f48b577bc-s5wnh\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 
05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.541595 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:44 crc kubenswrapper[5050]: I0131 05:26:44.804096 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f48b577bc-s5wnh"] Jan 31 05:26:45 crc kubenswrapper[5050]: I0131 05:26:45.115503 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" event={"ID":"e91bd0c0-8f30-4b54-99be-6405c1937651","Type":"ContainerStarted","Data":"7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83"} Jan 31 05:26:45 crc kubenswrapper[5050]: I0131 05:26:45.115553 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" event={"ID":"e91bd0c0-8f30-4b54-99be-6405c1937651","Type":"ContainerStarted","Data":"05af54711f3210bb8a56c23513147a3a7ca8c884d22962b209d7eb8eb2eac2df"} Jan 31 05:26:45 crc kubenswrapper[5050]: I0131 05:26:45.115924 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:45 crc kubenswrapper[5050]: I0131 05:26:45.121524 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:26:45 crc kubenswrapper[5050]: I0131 05:26:45.152762 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" podStartSLOduration=5.152739451 podStartE2EDuration="5.152739451s" podCreationTimestamp="2026-01-31 05:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:26:45.131541693 +0000 UTC m=+330.180703319" 
watchObservedRunningTime="2026-01-31 05:26:45.152739451 +0000 UTC m=+330.201901067" Jan 31 05:27:09 crc kubenswrapper[5050]: I0131 05:27:09.017710 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:27:09 crc kubenswrapper[5050]: I0131 05:27:09.018369 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.533321 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xphd4"] Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.535019 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.551463 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xphd4"] Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642119 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6de5ada7-3141-44d0-a826-f31bd486b0fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642177 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6de5ada7-3141-44d0-a826-f31bd486b0fe-trusted-ca\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642255 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642349 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gtv6\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-kube-api-access-8gtv6\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642376 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6de5ada7-3141-44d0-a826-f31bd486b0fe-registry-certificates\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642410 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-bound-sa-token\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642477 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-registry-tls\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.642621 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6de5ada7-3141-44d0-a826-f31bd486b0fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.670034 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.744507 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6de5ada7-3141-44d0-a826-f31bd486b0fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.744580 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6de5ada7-3141-44d0-a826-f31bd486b0fe-trusted-ca\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.744664 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gtv6\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-kube-api-access-8gtv6\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.744707 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6de5ada7-3141-44d0-a826-f31bd486b0fe-registry-certificates\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.744763 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-bound-sa-token\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.744795 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-registry-tls\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.745341 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6de5ada7-3141-44d0-a826-f31bd486b0fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.745921 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6de5ada7-3141-44d0-a826-f31bd486b0fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.747158 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6de5ada7-3141-44d0-a826-f31bd486b0fe-registry-certificates\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 
05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.747260 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6de5ada7-3141-44d0-a826-f31bd486b0fe-trusted-ca\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.754179 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6de5ada7-3141-44d0-a826-f31bd486b0fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.755784 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-registry-tls\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.776451 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gtv6\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-kube-api-access-8gtv6\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.778646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6de5ada7-3141-44d0-a826-f31bd486b0fe-bound-sa-token\") pod \"image-registry-66df7c8f76-xphd4\" (UID: \"6de5ada7-3141-44d0-a826-f31bd486b0fe\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:17 crc kubenswrapper[5050]: I0131 05:27:17.858258 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:18 crc kubenswrapper[5050]: I0131 05:27:18.355186 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xphd4"] Jan 31 05:27:19 crc kubenswrapper[5050]: I0131 05:27:19.341730 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" event={"ID":"6de5ada7-3141-44d0-a826-f31bd486b0fe","Type":"ContainerStarted","Data":"93be51e44c782cc676a6cea0e68dbd2d6c3425a48040551c46027c4afbb3bba7"} Jan 31 05:27:19 crc kubenswrapper[5050]: I0131 05:27:19.342144 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" event={"ID":"6de5ada7-3141-44d0-a826-f31bd486b0fe","Type":"ContainerStarted","Data":"70632c1fe7f260cdb61ca4c40971c1c6f55e102e10a084751acb2586666342a2"} Jan 31 05:27:19 crc kubenswrapper[5050]: I0131 05:27:19.342286 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:19 crc kubenswrapper[5050]: I0131 05:27:19.370817 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" podStartSLOduration=2.370780135 podStartE2EDuration="2.370780135s" podCreationTimestamp="2026-01-31 05:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:27:19.368203894 +0000 UTC m=+364.417365530" watchObservedRunningTime="2026-01-31 05:27:19.370780135 +0000 UTC m=+364.419941781" Jan 31 05:27:30 crc kubenswrapper[5050]: I0131 05:27:30.673425 5050 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-7f48b577bc-s5wnh"] Jan 31 05:27:30 crc kubenswrapper[5050]: I0131 05:27:30.674463 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" podUID="e91bd0c0-8f30-4b54-99be-6405c1937651" containerName="controller-manager" containerID="cri-o://7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83" gracePeriod=30 Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.110904 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.241679 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-config\") pod \"e91bd0c0-8f30-4b54-99be-6405c1937651\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.241760 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91bd0c0-8f30-4b54-99be-6405c1937651-serving-cert\") pod \"e91bd0c0-8f30-4b54-99be-6405c1937651\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.241835 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkbl5\" (UniqueName: \"kubernetes.io/projected/e91bd0c0-8f30-4b54-99be-6405c1937651-kube-api-access-dkbl5\") pod \"e91bd0c0-8f30-4b54-99be-6405c1937651\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.241880 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-client-ca\") 
pod \"e91bd0c0-8f30-4b54-99be-6405c1937651\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.242705 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e91bd0c0-8f30-4b54-99be-6405c1937651" (UID: "e91bd0c0-8f30-4b54-99be-6405c1937651"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.242725 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-config" (OuterVolumeSpecName: "config") pod "e91bd0c0-8f30-4b54-99be-6405c1937651" (UID: "e91bd0c0-8f30-4b54-99be-6405c1937651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.242752 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-client-ca" (OuterVolumeSpecName: "client-ca") pod "e91bd0c0-8f30-4b54-99be-6405c1937651" (UID: "e91bd0c0-8f30-4b54-99be-6405c1937651"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.241940 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-proxy-ca-bundles\") pod \"e91bd0c0-8f30-4b54-99be-6405c1937651\" (UID: \"e91bd0c0-8f30-4b54-99be-6405c1937651\") " Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.243424 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.243474 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.243498 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e91bd0c0-8f30-4b54-99be-6405c1937651-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.249758 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91bd0c0-8f30-4b54-99be-6405c1937651-kube-api-access-dkbl5" (OuterVolumeSpecName: "kube-api-access-dkbl5") pod "e91bd0c0-8f30-4b54-99be-6405c1937651" (UID: "e91bd0c0-8f30-4b54-99be-6405c1937651"). InnerVolumeSpecName "kube-api-access-dkbl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.249798 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91bd0c0-8f30-4b54-99be-6405c1937651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e91bd0c0-8f30-4b54-99be-6405c1937651" (UID: "e91bd0c0-8f30-4b54-99be-6405c1937651"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.345726 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91bd0c0-8f30-4b54-99be-6405c1937651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.345784 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkbl5\" (UniqueName: \"kubernetes.io/projected/e91bd0c0-8f30-4b54-99be-6405c1937651-kube-api-access-dkbl5\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.407926 5050 generic.go:334] "Generic (PLEG): container finished" podID="e91bd0c0-8f30-4b54-99be-6405c1937651" containerID="7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83" exitCode=0 Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.407994 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" event={"ID":"e91bd0c0-8f30-4b54-99be-6405c1937651","Type":"ContainerDied","Data":"7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83"} Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.408024 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" event={"ID":"e91bd0c0-8f30-4b54-99be-6405c1937651","Type":"ContainerDied","Data":"05af54711f3210bb8a56c23513147a3a7ca8c884d22962b209d7eb8eb2eac2df"} Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.408028 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f48b577bc-s5wnh" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.408043 5050 scope.go:117] "RemoveContainer" containerID="7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.425860 5050 scope.go:117] "RemoveContainer" containerID="7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83" Jan 31 05:27:31 crc kubenswrapper[5050]: E0131 05:27:31.426779 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83\": container with ID starting with 7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83 not found: ID does not exist" containerID="7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.426822 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83"} err="failed to get container status \"7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83\": rpc error: code = NotFound desc = could not find container \"7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83\": container with ID starting with 7317d96b35c35ebcd9db94714c52353ce7fc74127a7ef4c34e7c296894783d83 not found: ID does not exist" Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.449922 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f48b577bc-s5wnh"] Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.454796 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f48b577bc-s5wnh"] Jan 31 05:27:31 crc kubenswrapper[5050]: I0131 05:27:31.748229 5050 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="e91bd0c0-8f30-4b54-99be-6405c1937651" path="/var/lib/kubelet/pods/e91bd0c0-8f30-4b54-99be-6405c1937651/volumes" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.222170 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-ghthv"] Jan 31 05:27:32 crc kubenswrapper[5050]: E0131 05:27:32.222504 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91bd0c0-8f30-4b54-99be-6405c1937651" containerName="controller-manager" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.222523 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91bd0c0-8f30-4b54-99be-6405c1937651" containerName="controller-manager" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.222652 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91bd0c0-8f30-4b54-99be-6405c1937651" containerName="controller-manager" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.223227 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.232730 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.233479 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.234374 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.235200 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.235235 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.235242 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.236007 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-ghthv"] Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.249168 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.365347 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5thc2\" (UniqueName: \"kubernetes.io/projected/57e37147-2ca3-46dc-bcec-ec97908724ff-kube-api-access-5thc2\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " 
pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.365409 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-proxy-ca-bundles\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.365458 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57e37147-2ca3-46dc-bcec-ec97908724ff-serving-cert\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.365505 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-config\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.365565 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-client-ca\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.466596 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/57e37147-2ca3-46dc-bcec-ec97908724ff-serving-cert\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.466664 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-config\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.466727 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-client-ca\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.466768 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5thc2\" (UniqueName: \"kubernetes.io/projected/57e37147-2ca3-46dc-bcec-ec97908724ff-kube-api-access-5thc2\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.466795 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-proxy-ca-bundles\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.467936 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-proxy-ca-bundles\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.469003 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-client-ca\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.469561 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e37147-2ca3-46dc-bcec-ec97908724ff-config\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.471470 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57e37147-2ca3-46dc-bcec-ec97908724ff-serving-cert\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.498242 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5thc2\" (UniqueName: \"kubernetes.io/projected/57e37147-2ca3-46dc-bcec-ec97908724ff-kube-api-access-5thc2\") pod \"controller-manager-7f6f6b4787-ghthv\" (UID: \"57e37147-2ca3-46dc-bcec-ec97908724ff\") " pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 
05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.549985 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:32 crc kubenswrapper[5050]: I0131 05:27:32.982397 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6f6b4787-ghthv"] Jan 31 05:27:32 crc kubenswrapper[5050]: W0131 05:27:32.992967 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e37147_2ca3_46dc_bcec_ec97908724ff.slice/crio-c298fcce09a2c86324af8819d1f77aec1ff7f33430e078bccfedec13d5a77eab WatchSource:0}: Error finding container c298fcce09a2c86324af8819d1f77aec1ff7f33430e078bccfedec13d5a77eab: Status 404 returned error can't find the container with id c298fcce09a2c86324af8819d1f77aec1ff7f33430e078bccfedec13d5a77eab Jan 31 05:27:33 crc kubenswrapper[5050]: I0131 05:27:33.423776 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" event={"ID":"57e37147-2ca3-46dc-bcec-ec97908724ff","Type":"ContainerStarted","Data":"5f9db6ac8158f456ad6755dd5118c401896a41726659948294a6bf0ea3f510f5"} Jan 31 05:27:33 crc kubenswrapper[5050]: I0131 05:27:33.424073 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" event={"ID":"57e37147-2ca3-46dc-bcec-ec97908724ff","Type":"ContainerStarted","Data":"c298fcce09a2c86324af8819d1f77aec1ff7f33430e078bccfedec13d5a77eab"} Jan 31 05:27:33 crc kubenswrapper[5050]: I0131 05:27:33.424284 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:33 crc kubenswrapper[5050]: I0131 05:27:33.429369 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" Jan 31 05:27:33 crc kubenswrapper[5050]: I0131 05:27:33.450721 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f6f6b4787-ghthv" podStartSLOduration=3.450683115 podStartE2EDuration="3.450683115s" podCreationTimestamp="2026-01-31 05:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:27:33.446426445 +0000 UTC m=+378.495588051" watchObservedRunningTime="2026-01-31 05:27:33.450683115 +0000 UTC m=+378.499844751" Jan 31 05:27:37 crc kubenswrapper[5050]: I0131 05:27:37.868473 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xphd4" Jan 31 05:27:37 crc kubenswrapper[5050]: I0131 05:27:37.932926 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mvp9"] Jan 31 05:27:39 crc kubenswrapper[5050]: I0131 05:27:39.018012 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:27:39 crc kubenswrapper[5050]: I0131 05:27:39.018079 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.672125 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnvhs"] Jan 31 05:27:46 crc 
kubenswrapper[5050]: I0131 05:27:46.673215 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnvhs" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="registry-server" containerID="cri-o://8a717fc578b95a9f6518121fda39ad508f76dbcc14a8531d8cc20d5a7770e036" gracePeriod=30 Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.675934 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdgsp"] Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.676214 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zdgsp" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="registry-server" containerID="cri-o://db5bec45b02d2153a7e8f5d1eb6102de1911bda4b253206ab36ca4f92df33af3" gracePeriod=30 Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.689085 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9jhn"] Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.689477 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" containerID="cri-o://cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e" gracePeriod=30 Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.706377 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29pg"] Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.706648 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m29pg" podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="registry-server" containerID="cri-o://7c0032ec02d6d5ab12f383ca9454e86cd7f6eef2c446af1a1b3d42f9f0079dcb" 
gracePeriod=30 Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.721676 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmfcw"] Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.724441 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qmfcw" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="registry-server" containerID="cri-o://43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a" gracePeriod=30 Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.775881 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9x8x"] Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.776708 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9x8x"] Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.776774 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.791049 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.791178 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.791254 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f66z\" (UniqueName: \"kubernetes.io/projected/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-kube-api-access-9f66z\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.892397 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f66z\" (UniqueName: \"kubernetes.io/projected/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-kube-api-access-9f66z\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.892483 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.892522 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.893743 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.905723 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:46 crc kubenswrapper[5050]: I0131 05:27:46.915331 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f66z\" (UniqueName: \"kubernetes.io/projected/0511caf5-aa17-47ef-b30c-3ba05ec0b8dc-kube-api-access-9f66z\") pod \"marketplace-operator-79b997595-g9x8x\" (UID: \"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:47 crc kubenswrapper[5050]: E0131 05:27:47.156226 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a is running failed: container process not found" containerID="43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 05:27:47 crc kubenswrapper[5050]: E0131 05:27:47.157045 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a is running failed: container process not found" containerID="43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 05:27:47 crc kubenswrapper[5050]: E0131 05:27:47.157439 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a is running failed: container process not found" containerID="43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 05:27:47 crc kubenswrapper[5050]: E0131 05:27:47.157498 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-qmfcw" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="registry-server" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.246424 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.250365 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.397684 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftw82\" (UniqueName: \"kubernetes.io/projected/a8c36ad8-2c55-41d9-8bcc-8accc3501626-kube-api-access-ftw82\") pod \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.398021 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-trusted-ca\") pod \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.398160 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-operator-metrics\") pod \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\" (UID: \"a8c36ad8-2c55-41d9-8bcc-8accc3501626\") " Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.399443 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a8c36ad8-2c55-41d9-8bcc-8accc3501626" (UID: "a8c36ad8-2c55-41d9-8bcc-8accc3501626"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.403157 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a8c36ad8-2c55-41d9-8bcc-8accc3501626" (UID: "a8c36ad8-2c55-41d9-8bcc-8accc3501626"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.404033 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c36ad8-2c55-41d9-8bcc-8accc3501626-kube-api-access-ftw82" (OuterVolumeSpecName: "kube-api-access-ftw82") pod "a8c36ad8-2c55-41d9-8bcc-8accc3501626" (UID: "a8c36ad8-2c55-41d9-8bcc-8accc3501626"). InnerVolumeSpecName "kube-api-access-ftw82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.499285 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftw82\" (UniqueName: \"kubernetes.io/projected/a8c36ad8-2c55-41d9-8bcc-8accc3501626-kube-api-access-ftw82\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.499307 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.499317 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8c36ad8-2c55-41d9-8bcc-8accc3501626-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.520129 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="1bdc621b-09b4-43de-921b-be2322174c79" containerID="43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a" exitCode=0 Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.520240 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmfcw" event={"ID":"1bdc621b-09b4-43de-921b-be2322174c79","Type":"ContainerDied","Data":"43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a"} Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.522477 5050 generic.go:334] "Generic (PLEG): container finished" podID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerID="db5bec45b02d2153a7e8f5d1eb6102de1911bda4b253206ab36ca4f92df33af3" exitCode=0 Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.522543 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdgsp" event={"ID":"f2a80941-a665-4ff2-8f03-841e88b654cc","Type":"ContainerDied","Data":"db5bec45b02d2153a7e8f5d1eb6102de1911bda4b253206ab36ca4f92df33af3"} Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.526619 5050 generic.go:334] "Generic (PLEG): container finished" podID="efd09525-8724-4184-9311-f2dd52139a81" containerID="7c0032ec02d6d5ab12f383ca9454e86cd7f6eef2c446af1a1b3d42f9f0079dcb" exitCode=0 Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.526694 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29pg" event={"ID":"efd09525-8724-4184-9311-f2dd52139a81","Type":"ContainerDied","Data":"7c0032ec02d6d5ab12f383ca9454e86cd7f6eef2c446af1a1b3d42f9f0079dcb"} Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.529754 5050 generic.go:334] "Generic (PLEG): container finished" podID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerID="8a717fc578b95a9f6518121fda39ad508f76dbcc14a8531d8cc20d5a7770e036" exitCode=0 Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.529868 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-tnvhs" event={"ID":"29fd7267-f00e-4b58-bdab-55bf2d0c801c","Type":"ContainerDied","Data":"8a717fc578b95a9f6518121fda39ad508f76dbcc14a8531d8cc20d5a7770e036"} Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.532198 5050 generic.go:334] "Generic (PLEG): container finished" podID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerID="cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e" exitCode=0 Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.532234 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" event={"ID":"a8c36ad8-2c55-41d9-8bcc-8accc3501626","Type":"ContainerDied","Data":"cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e"} Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.532254 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" event={"ID":"a8c36ad8-2c55-41d9-8bcc-8accc3501626","Type":"ContainerDied","Data":"f6b330752c43e715b80cef783450ac191c381094c08068a01ca8875b3b943a5b"} Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.532277 5050 scope.go:117] "RemoveContainer" containerID="cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.532531 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g9jhn" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.560848 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9jhn"] Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.563480 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9jhn"] Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.564343 5050 scope.go:117] "RemoveContainer" containerID="c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.602896 5050 scope.go:117] "RemoveContainer" containerID="cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e" Jan 31 05:27:47 crc kubenswrapper[5050]: E0131 05:27:47.603643 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e\": container with ID starting with cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e not found: ID does not exist" containerID="cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.603699 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e"} err="failed to get container status \"cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e\": rpc error: code = NotFound desc = could not find container \"cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e\": container with ID starting with cf69d8acadec07be019cbe1f1e4a7899d27290919b6f850dbc1272540b7ad91e not found: ID does not exist" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.603736 5050 scope.go:117] "RemoveContainer" 
containerID="c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03" Jan 31 05:27:47 crc kubenswrapper[5050]: E0131 05:27:47.604405 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03\": container with ID starting with c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03 not found: ID does not exist" containerID="c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.604442 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03"} err="failed to get container status \"c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03\": rpc error: code = NotFound desc = could not find container \"c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03\": container with ID starting with c15c6a6a5c0b1f74199149c31703772dc897a9414e4b8caf392e7573fe84ff03 not found: ID does not exist" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.641720 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g9x8x"] Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.744945 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" path="/var/lib/kubelet/pods/a8c36ad8-2c55-41d9-8bcc-8accc3501626/volumes" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.763104 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.906460 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-utilities\") pod \"efd09525-8724-4184-9311-f2dd52139a81\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.906823 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgmfq\" (UniqueName: \"kubernetes.io/projected/efd09525-8724-4184-9311-f2dd52139a81-kube-api-access-wgmfq\") pod \"efd09525-8724-4184-9311-f2dd52139a81\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.906871 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-catalog-content\") pod \"efd09525-8724-4184-9311-f2dd52139a81\" (UID: \"efd09525-8724-4184-9311-f2dd52139a81\") " Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.907140 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-utilities" (OuterVolumeSpecName: "utilities") pod "efd09525-8724-4184-9311-f2dd52139a81" (UID: "efd09525-8724-4184-9311-f2dd52139a81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.917136 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd09525-8724-4184-9311-f2dd52139a81-kube-api-access-wgmfq" (OuterVolumeSpecName: "kube-api-access-wgmfq") pod "efd09525-8724-4184-9311-f2dd52139a81" (UID: "efd09525-8724-4184-9311-f2dd52139a81"). InnerVolumeSpecName "kube-api-access-wgmfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.937656 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efd09525-8724-4184-9311-f2dd52139a81" (UID: "efd09525-8724-4184-9311-f2dd52139a81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.978795 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.980754 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:27:47 crc kubenswrapper[5050]: I0131 05:27:47.984627 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.008511 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.008553 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd09525-8724-4184-9311-f2dd52139a81-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.008568 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgmfq\" (UniqueName: \"kubernetes.io/projected/efd09525-8724-4184-9311-f2dd52139a81-kube-api-access-wgmfq\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109387 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-utilities\") pod \"1bdc621b-09b4-43de-921b-be2322174c79\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109457 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkht\" (UniqueName: \"kubernetes.io/projected/f2a80941-a665-4ff2-8f03-841e88b654cc-kube-api-access-mmkht\") pod \"f2a80941-a665-4ff2-8f03-841e88b654cc\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109477 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ztzc\" (UniqueName: \"kubernetes.io/projected/1bdc621b-09b4-43de-921b-be2322174c79-kube-api-access-6ztzc\") pod \"1bdc621b-09b4-43de-921b-be2322174c79\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " Jan 31 05:27:48 crc 
kubenswrapper[5050]: I0131 05:27:48.109501 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4hxz\" (UniqueName: \"kubernetes.io/projected/29fd7267-f00e-4b58-bdab-55bf2d0c801c-kube-api-access-h4hxz\") pod \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109526 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-utilities\") pod \"f2a80941-a665-4ff2-8f03-841e88b654cc\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109548 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-utilities\") pod \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109572 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-catalog-content\") pod \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\" (UID: \"29fd7267-f00e-4b58-bdab-55bf2d0c801c\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109590 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-catalog-content\") pod \"1bdc621b-09b4-43de-921b-be2322174c79\" (UID: \"1bdc621b-09b4-43de-921b-be2322174c79\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.109606 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-catalog-content\") pod \"f2a80941-a665-4ff2-8f03-841e88b654cc\" (UID: \"f2a80941-a665-4ff2-8f03-841e88b654cc\") " Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.110840 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-utilities" (OuterVolumeSpecName: "utilities") pod "1bdc621b-09b4-43de-921b-be2322174c79" (UID: "1bdc621b-09b4-43de-921b-be2322174c79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.111030 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-utilities" (OuterVolumeSpecName: "utilities") pod "29fd7267-f00e-4b58-bdab-55bf2d0c801c" (UID: "29fd7267-f00e-4b58-bdab-55bf2d0c801c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.111247 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-utilities" (OuterVolumeSpecName: "utilities") pod "f2a80941-a665-4ff2-8f03-841e88b654cc" (UID: "f2a80941-a665-4ff2-8f03-841e88b654cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.115426 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a80941-a665-4ff2-8f03-841e88b654cc-kube-api-access-mmkht" (OuterVolumeSpecName: "kube-api-access-mmkht") pod "f2a80941-a665-4ff2-8f03-841e88b654cc" (UID: "f2a80941-a665-4ff2-8f03-841e88b654cc"). InnerVolumeSpecName "kube-api-access-mmkht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.115543 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29fd7267-f00e-4b58-bdab-55bf2d0c801c-kube-api-access-h4hxz" (OuterVolumeSpecName: "kube-api-access-h4hxz") pod "29fd7267-f00e-4b58-bdab-55bf2d0c801c" (UID: "29fd7267-f00e-4b58-bdab-55bf2d0c801c"). InnerVolumeSpecName "kube-api-access-h4hxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.115845 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bdc621b-09b4-43de-921b-be2322174c79-kube-api-access-6ztzc" (OuterVolumeSpecName: "kube-api-access-6ztzc") pod "1bdc621b-09b4-43de-921b-be2322174c79" (UID: "1bdc621b-09b4-43de-921b-be2322174c79"). InnerVolumeSpecName "kube-api-access-6ztzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.160732 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29fd7267-f00e-4b58-bdab-55bf2d0c801c" (UID: "29fd7267-f00e-4b58-bdab-55bf2d0c801c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.181244 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2a80941-a665-4ff2-8f03-841e88b654cc" (UID: "f2a80941-a665-4ff2-8f03-841e88b654cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211251 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4hxz\" (UniqueName: \"kubernetes.io/projected/29fd7267-f00e-4b58-bdab-55bf2d0c801c-kube-api-access-h4hxz\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211295 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211306 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211315 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29fd7267-f00e-4b58-bdab-55bf2d0c801c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211323 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a80941-a665-4ff2-8f03-841e88b654cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211330 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211339 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmkht\" (UniqueName: \"kubernetes.io/projected/f2a80941-a665-4ff2-8f03-841e88b654cc-kube-api-access-mmkht\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.211347 
5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ztzc\" (UniqueName: \"kubernetes.io/projected/1bdc621b-09b4-43de-921b-be2322174c79-kube-api-access-6ztzc\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.227033 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bdc621b-09b4-43de-921b-be2322174c79" (UID: "1bdc621b-09b4-43de-921b-be2322174c79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.312442 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bdc621b-09b4-43de-921b-be2322174c79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.542032 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnvhs" event={"ID":"29fd7267-f00e-4b58-bdab-55bf2d0c801c","Type":"ContainerDied","Data":"bee6f59ed3676acee985afa4695b4a610d1f11451ee2beea7d5d22c2d5aedf73"} Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.542126 5050 scope.go:117] "RemoveContainer" containerID="8a717fc578b95a9f6518121fda39ad508f76dbcc14a8531d8cc20d5a7770e036" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.542250 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnvhs" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.550626 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" event={"ID":"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc","Type":"ContainerStarted","Data":"7e6f92c02a2ae729818cc1e5ec2c824f3df7c9e182562e76d9ac48fbbbf67c13"} Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.550711 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" event={"ID":"0511caf5-aa17-47ef-b30c-3ba05ec0b8dc","Type":"ContainerStarted","Data":"086c1f693f3c89d7aca90f05d92fbcad79ae6194746a2bc3003d6c5a8adcbe5d"} Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.551291 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.555842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmfcw" event={"ID":"1bdc621b-09b4-43de-921b-be2322174c79","Type":"ContainerDied","Data":"83cabe1a0a54bc86068b86d1dfdc420d6442cb440d66ef984b92fa61c3485b7b"} Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.555881 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmfcw" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.562270 5050 scope.go:117] "RemoveContainer" containerID="d9a92b1628de778d4f0138f718695a12753aa13bd010169fbf6ada1e82334518" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.563281 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.567492 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdgsp" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.567537 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdgsp" event={"ID":"f2a80941-a665-4ff2-8f03-841e88b654cc","Type":"ContainerDied","Data":"4aaa32bfc18b362fcf86d3ce2fedc10a9087611062f87ea4e7f7074cce04d03c"} Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.582909 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29pg" event={"ID":"efd09525-8724-4184-9311-f2dd52139a81","Type":"ContainerDied","Data":"ae3ecfec13045ade7b8bcd8ccd0af9b1c876eb394d78eb40951adcdd307c4443"} Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.583005 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29pg" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.600771 5050 scope.go:117] "RemoveContainer" containerID="d7e42e990addfea469ba8301e391604ba8e1c28e0d658214f83f9a6ea75a3b23" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.610353 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g9x8x" podStartSLOduration=2.610337673 podStartE2EDuration="2.610337673s" podCreationTimestamp="2026-01-31 05:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:27:48.582936611 +0000 UTC m=+393.632098247" watchObservedRunningTime="2026-01-31 05:27:48.610337673 +0000 UTC m=+393.659499269" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.646928 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnvhs"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.659426 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-tnvhs"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.668783 5050 scope.go:117] "RemoveContainer" containerID="43c87a05da20e71455c8bce95724b93579140af830ce4f57adbaad58c08d725a" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.671590 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdgsp"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.678611 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zdgsp"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.686006 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmfcw"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.690857 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qmfcw"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.694862 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29pg"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.697041 5050 scope.go:117] "RemoveContainer" containerID="6d3ceb67ef6737fe037fbf1ee7db6ef47975a8d01a750b50207f35671cf706fe" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.700458 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29pg"] Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.730625 5050 scope.go:117] "RemoveContainer" containerID="700ebf9f5037d09f5829646bf087771efd191bffd04792ce9061cae280f95005" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.746414 5050 scope.go:117] "RemoveContainer" containerID="db5bec45b02d2153a7e8f5d1eb6102de1911bda4b253206ab36ca4f92df33af3" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.766495 5050 scope.go:117] "RemoveContainer" containerID="bf11b6c771e869b3a60307470bbaaa28f8dd6f44ed4ec1cdd0007f8c85121ccc" Jan 31 05:27:48 
crc kubenswrapper[5050]: I0131 05:27:48.782665 5050 scope.go:117] "RemoveContainer" containerID="42d37073af3f53fcd436d261c86e93640e43a216dd5a8b8cbbbd8d4e35d570c7" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.816069 5050 scope.go:117] "RemoveContainer" containerID="7c0032ec02d6d5ab12f383ca9454e86cd7f6eef2c446af1a1b3d42f9f0079dcb" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.833628 5050 scope.go:117] "RemoveContainer" containerID="2923098640b4747de949fcb609515a97dba52cb5620a36f2f95f75f4c7d6fe47" Jan 31 05:27:48 crc kubenswrapper[5050]: I0131 05:27:48.859996 5050 scope.go:117] "RemoveContainer" containerID="6fd515980d0d6e5ace65369330fdaddc741c14d844482fee966397d0e34ee603" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085123 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gp6l2"] Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085346 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085359 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085372 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085381 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085397 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085405 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085418 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085426 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085440 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085450 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085461 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085469 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085481 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085489 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085500 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085508 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085518 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085527 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085539 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085548 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085560 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085569 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085580 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085588 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085599 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085607 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="extract-utilities" Jan 31 05:27:49 crc kubenswrapper[5050]: E0131 05:27:49.085618 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085626 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="extract-content" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085730 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085741 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085753 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085765 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c36ad8-2c55-41d9-8bcc-8accc3501626" containerName="marketplace-operator" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085774 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bdc621b-09b4-43de-921b-be2322174c79" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.085789 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd09525-8724-4184-9311-f2dd52139a81" containerName="registry-server" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.086629 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.088941 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.094721 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp6l2"] Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.124015 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9djz\" (UniqueName: \"kubernetes.io/projected/5187fc7e-79c1-49e5-8060-aeeed8bd9870-kube-api-access-m9djz\") pod \"certified-operators-gp6l2\" (UID: \"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.124324 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5187fc7e-79c1-49e5-8060-aeeed8bd9870-catalog-content\") pod \"certified-operators-gp6l2\" (UID: \"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.124467 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5187fc7e-79c1-49e5-8060-aeeed8bd9870-utilities\") pod \"certified-operators-gp6l2\" (UID: \"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.225548 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5187fc7e-79c1-49e5-8060-aeeed8bd9870-catalog-content\") pod \"certified-operators-gp6l2\" (UID: 
\"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.225839 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5187fc7e-79c1-49e5-8060-aeeed8bd9870-utilities\") pod \"certified-operators-gp6l2\" (UID: \"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.225881 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9djz\" (UniqueName: \"kubernetes.io/projected/5187fc7e-79c1-49e5-8060-aeeed8bd9870-kube-api-access-m9djz\") pod \"certified-operators-gp6l2\" (UID: \"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.226227 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5187fc7e-79c1-49e5-8060-aeeed8bd9870-catalog-content\") pod \"certified-operators-gp6l2\" (UID: \"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.226416 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5187fc7e-79c1-49e5-8060-aeeed8bd9870-utilities\") pod \"certified-operators-gp6l2\" (UID: \"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.252553 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9djz\" (UniqueName: \"kubernetes.io/projected/5187fc7e-79c1-49e5-8060-aeeed8bd9870-kube-api-access-m9djz\") pod \"certified-operators-gp6l2\" (UID: 
\"5187fc7e-79c1-49e5-8060-aeeed8bd9870\") " pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.439136 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.744081 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bdc621b-09b4-43de-921b-be2322174c79" path="/var/lib/kubelet/pods/1bdc621b-09b4-43de-921b-be2322174c79/volumes" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.745071 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29fd7267-f00e-4b58-bdab-55bf2d0c801c" path="/var/lib/kubelet/pods/29fd7267-f00e-4b58-bdab-55bf2d0c801c/volumes" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.745716 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd09525-8724-4184-9311-f2dd52139a81" path="/var/lib/kubelet/pods/efd09525-8724-4184-9311-f2dd52139a81/volumes" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.746851 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a80941-a665-4ff2-8f03-841e88b654cc" path="/var/lib/kubelet/pods/f2a80941-a665-4ff2-8f03-841e88b654cc/volumes" Jan 31 05:27:49 crc kubenswrapper[5050]: I0131 05:27:49.869085 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp6l2"] Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.602878 5050 generic.go:334] "Generic (PLEG): container finished" podID="5187fc7e-79c1-49e5-8060-aeeed8bd9870" containerID="47d406b9a85719eabcca391fedf49fb91a1fe5e27e6e6781e84303eb943646c6" exitCode=0 Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.603916 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp6l2" 
event={"ID":"5187fc7e-79c1-49e5-8060-aeeed8bd9870","Type":"ContainerDied","Data":"47d406b9a85719eabcca391fedf49fb91a1fe5e27e6e6781e84303eb943646c6"} Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.603987 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp6l2" event={"ID":"5187fc7e-79c1-49e5-8060-aeeed8bd9870","Type":"ContainerStarted","Data":"1ea717d5c74001ebf9896ee020d3e3fbb28609918dd9859f2aa7c09fe8deb985"} Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.638511 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn"] Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.639166 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" podUID="32bce78f-8a53-4291-83c9-2d92bfe138bf" containerName="route-controller-manager" containerID="cri-o://c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139" gracePeriod=30 Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.872607 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pxdml"] Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.873768 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.875699 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 05:27:50 crc kubenswrapper[5050]: I0131 05:27:50.895987 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxdml"] Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.049458 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a24e898-ac43-489c-a204-f817d6fb32a1-utilities\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.049538 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a24e898-ac43-489c-a204-f817d6fb32a1-catalog-content\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.049561 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngr5p\" (UniqueName: \"kubernetes.io/projected/2a24e898-ac43-489c-a204-f817d6fb32a1-kube-api-access-ngr5p\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.151077 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a24e898-ac43-489c-a204-f817d6fb32a1-utilities\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " 
pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.151150 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a24e898-ac43-489c-a204-f817d6fb32a1-catalog-content\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.151182 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngr5p\" (UniqueName: \"kubernetes.io/projected/2a24e898-ac43-489c-a204-f817d6fb32a1-kube-api-access-ngr5p\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.151905 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a24e898-ac43-489c-a204-f817d6fb32a1-utilities\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.152252 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a24e898-ac43-489c-a204-f817d6fb32a1-catalog-content\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.168816 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngr5p\" (UniqueName: \"kubernetes.io/projected/2a24e898-ac43-489c-a204-f817d6fb32a1-kube-api-access-ngr5p\") pod \"redhat-operators-pxdml\" (UID: \"2a24e898-ac43-489c-a204-f817d6fb32a1\") " pod="openshift-marketplace/redhat-operators-pxdml" Jan 
31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.198259 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.272870 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.353494 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-config\") pod \"32bce78f-8a53-4291-83c9-2d92bfe138bf\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.353545 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8nqf\" (UniqueName: \"kubernetes.io/projected/32bce78f-8a53-4291-83c9-2d92bfe138bf-kube-api-access-f8nqf\") pod \"32bce78f-8a53-4291-83c9-2d92bfe138bf\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.353575 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bce78f-8a53-4291-83c9-2d92bfe138bf-serving-cert\") pod \"32bce78f-8a53-4291-83c9-2d92bfe138bf\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.353633 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-client-ca\") pod \"32bce78f-8a53-4291-83c9-2d92bfe138bf\" (UID: \"32bce78f-8a53-4291-83c9-2d92bfe138bf\") " Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.354641 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "32bce78f-8a53-4291-83c9-2d92bfe138bf" (UID: "32bce78f-8a53-4291-83c9-2d92bfe138bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.354702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-config" (OuterVolumeSpecName: "config") pod "32bce78f-8a53-4291-83c9-2d92bfe138bf" (UID: "32bce78f-8a53-4291-83c9-2d92bfe138bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.357081 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bce78f-8a53-4291-83c9-2d92bfe138bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32bce78f-8a53-4291-83c9-2d92bfe138bf" (UID: "32bce78f-8a53-4291-83c9-2d92bfe138bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.357089 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bce78f-8a53-4291-83c9-2d92bfe138bf-kube-api-access-f8nqf" (OuterVolumeSpecName: "kube-api-access-f8nqf") pod "32bce78f-8a53-4291-83c9-2d92bfe138bf" (UID: "32bce78f-8a53-4291-83c9-2d92bfe138bf"). InnerVolumeSpecName "kube-api-access-f8nqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.454667 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.454696 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8nqf\" (UniqueName: \"kubernetes.io/projected/32bce78f-8a53-4291-83c9-2d92bfe138bf-kube-api-access-f8nqf\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.454708 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bce78f-8a53-4291-83c9-2d92bfe138bf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.454723 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32bce78f-8a53-4291-83c9-2d92bfe138bf-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.476795 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4cw5k"] Jan 31 05:27:51 crc kubenswrapper[5050]: E0131 05:27:51.477048 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bce78f-8a53-4291-83c9-2d92bfe138bf" containerName="route-controller-manager" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.477088 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bce78f-8a53-4291-83c9-2d92bfe138bf" containerName="route-controller-manager" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.477206 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="32bce78f-8a53-4291-83c9-2d92bfe138bf" containerName="route-controller-manager" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.479644 5050 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.483847 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4cw5k"] Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.484502 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.607768 5050 generic.go:334] "Generic (PLEG): container finished" podID="32bce78f-8a53-4291-83c9-2d92bfe138bf" containerID="c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139" exitCode=0 Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.607819 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.607825 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" event={"ID":"32bce78f-8a53-4291-83c9-2d92bfe138bf","Type":"ContainerDied","Data":"c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139"} Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.608196 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn" event={"ID":"32bce78f-8a53-4291-83c9-2d92bfe138bf","Type":"ContainerDied","Data":"08b5774f366ddd4b6c5a68b372cd0c11bf4810b513be168ee704c685837d550d"} Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.608216 5050 scope.go:117] "RemoveContainer" containerID="c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.610500 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp6l2" 
event={"ID":"5187fc7e-79c1-49e5-8060-aeeed8bd9870","Type":"ContainerStarted","Data":"41210b774b8fa9cef58e759744e13cfc2dbdd72203e9282ec0b7b59b1d564ebf"} Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.622120 5050 scope.go:117] "RemoveContainer" containerID="c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139" Jan 31 05:27:51 crc kubenswrapper[5050]: E0131 05:27:51.623841 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139\": container with ID starting with c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139 not found: ID does not exist" containerID="c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.623882 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139"} err="failed to get container status \"c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139\": rpc error: code = NotFound desc = could not find container \"c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139\": container with ID starting with c2a26f696f479248e2f615e5dc1a9401bd2fbcc34a0e7efc1bb1a8eb9d5e2139 not found: ID does not exist" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.646894 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn"] Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.651226 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b95bb48c6-9fljn"] Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.659126 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5n4c\" (UniqueName: 
\"kubernetes.io/projected/3a56f313-7ca2-4e38-a80b-6395af5eebde-kube-api-access-c5n4c\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.659272 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-catalog-content\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.659303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-utilities\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.669852 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxdml"] Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.745001 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32bce78f-8a53-4291-83c9-2d92bfe138bf" path="/var/lib/kubelet/pods/32bce78f-8a53-4291-83c9-2d92bfe138bf/volumes" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.759901 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-catalog-content\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.759934 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-utilities\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.759982 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5n4c\" (UniqueName: \"kubernetes.io/projected/3a56f313-7ca2-4e38-a80b-6395af5eebde-kube-api-access-c5n4c\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.760334 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-catalog-content\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.760539 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-utilities\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.777583 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5n4c\" (UniqueName: \"kubernetes.io/projected/3a56f313-7ca2-4e38-a80b-6395af5eebde-kube-api-access-c5n4c\") pod \"community-operators-4cw5k\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:51 crc kubenswrapper[5050]: I0131 05:27:51.798135 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.237369 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs"] Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.242647 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.245571 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.245977 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.246198 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.246345 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.246506 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.246788 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.256028 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs"] Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.267224 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-4cw5k"] Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.366072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f7d254c-717e-405b-9ad2-227f875e8769-client-ca\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.366154 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7d254c-717e-405b-9ad2-227f875e8769-config\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.366182 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndslx\" (UniqueName: \"kubernetes.io/projected/0f7d254c-717e-405b-9ad2-227f875e8769-kube-api-access-ndslx\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.366205 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f7d254c-717e-405b-9ad2-227f875e8769-serving-cert\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.467292 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f7d254c-717e-405b-9ad2-227f875e8769-client-ca\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.467551 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7d254c-717e-405b-9ad2-227f875e8769-config\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.467676 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndslx\" (UniqueName: \"kubernetes.io/projected/0f7d254c-717e-405b-9ad2-227f875e8769-kube-api-access-ndslx\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.467800 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f7d254c-717e-405b-9ad2-227f875e8769-serving-cert\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.468395 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f7d254c-717e-405b-9ad2-227f875e8769-client-ca\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.470561 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7d254c-717e-405b-9ad2-227f875e8769-config\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.473915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f7d254c-717e-405b-9ad2-227f875e8769-serving-cert\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.489070 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndslx\" (UniqueName: \"kubernetes.io/projected/0f7d254c-717e-405b-9ad2-227f875e8769-kube-api-access-ndslx\") pod \"route-controller-manager-5c49c7d9b9-pgdgs\" (UID: \"0f7d254c-717e-405b-9ad2-227f875e8769\") " pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.594715 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.619964 5050 generic.go:334] "Generic (PLEG): container finished" podID="5187fc7e-79c1-49e5-8060-aeeed8bd9870" containerID="41210b774b8fa9cef58e759744e13cfc2dbdd72203e9282ec0b7b59b1d564ebf" exitCode=0 Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.620055 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp6l2" event={"ID":"5187fc7e-79c1-49e5-8060-aeeed8bd9870","Type":"ContainerDied","Data":"41210b774b8fa9cef58e759744e13cfc2dbdd72203e9282ec0b7b59b1d564ebf"} Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.621874 5050 generic.go:334] "Generic (PLEG): container finished" podID="2a24e898-ac43-489c-a204-f817d6fb32a1" containerID="26e5108156309b35fcfe96ec99e554075b2ec3b4bc5041a48e01b8e12cbe946f" exitCode=0 Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.622155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxdml" event={"ID":"2a24e898-ac43-489c-a204-f817d6fb32a1","Type":"ContainerDied","Data":"26e5108156309b35fcfe96ec99e554075b2ec3b4bc5041a48e01b8e12cbe946f"} Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.625284 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxdml" event={"ID":"2a24e898-ac43-489c-a204-f817d6fb32a1","Type":"ContainerStarted","Data":"5b3345f67488ec12e178227c2dd694b141b2a1da13b088e8b331be9c5fa6e37a"} Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.628265 5050 generic.go:334] "Generic (PLEG): container finished" podID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerID="d3437986eb479e2cd19d0a2d9e2be39dfaefd934f635b9230400c906a149ee2b" exitCode=0 Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.628360 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4cw5k" event={"ID":"3a56f313-7ca2-4e38-a80b-6395af5eebde","Type":"ContainerDied","Data":"d3437986eb479e2cd19d0a2d9e2be39dfaefd934f635b9230400c906a149ee2b"} Jan 31 05:27:52 crc kubenswrapper[5050]: I0131 05:27:52.628392 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cw5k" event={"ID":"3a56f313-7ca2-4e38-a80b-6395af5eebde","Type":"ContainerStarted","Data":"396333ea8f1c2aa31514279544d89e385ed360edde6817d86ad51a9ea1694fc4"} Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.003074 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs"] Jan 31 05:27:53 crc kubenswrapper[5050]: W0131 05:27:53.009591 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f7d254c_717e_405b_9ad2_227f875e8769.slice/crio-aaff3af20f2160424f4425e116f91aeae12cb809181228df6e0c5f7dfe0a477d WatchSource:0}: Error finding container aaff3af20f2160424f4425e116f91aeae12cb809181228df6e0c5f7dfe0a477d: Status 404 returned error can't find the container with id aaff3af20f2160424f4425e116f91aeae12cb809181228df6e0c5f7dfe0a477d Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.278652 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w994t"] Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.280889 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.281828 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-utilities\") pod \"redhat-marketplace-w994t\" (UID: \"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.281893 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-catalog-content\") pod \"redhat-marketplace-w994t\" (UID: \"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.281982 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9ft\" (UniqueName: \"kubernetes.io/projected/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-kube-api-access-gh9ft\") pod \"redhat-marketplace-w994t\" (UID: \"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.282629 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.287704 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w994t"] Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.383275 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-utilities\") pod \"redhat-marketplace-w994t\" (UID: 
\"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.383324 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-catalog-content\") pod \"redhat-marketplace-w994t\" (UID: \"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.383357 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh9ft\" (UniqueName: \"kubernetes.io/projected/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-kube-api-access-gh9ft\") pod \"redhat-marketplace-w994t\" (UID: \"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.383788 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-utilities\") pod \"redhat-marketplace-w994t\" (UID: \"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.383840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-catalog-content\") pod \"redhat-marketplace-w994t\" (UID: \"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.407357 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh9ft\" (UniqueName: \"kubernetes.io/projected/9f72d45d-bc4c-4a9f-97b4-202d3493d7b4-kube-api-access-gh9ft\") pod \"redhat-marketplace-w994t\" (UID: 
\"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4\") " pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.593461 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.638233 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" event={"ID":"0f7d254c-717e-405b-9ad2-227f875e8769","Type":"ContainerStarted","Data":"d27e2c5cb2fd03c93c0a438756765ca833ffd21826b1d30c88ce2dbfd54d9d74"} Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.638683 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" event={"ID":"0f7d254c-717e-405b-9ad2-227f875e8769","Type":"ContainerStarted","Data":"aaff3af20f2160424f4425e116f91aeae12cb809181228df6e0c5f7dfe0a477d"} Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.638721 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.662157 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" podStartSLOduration=3.662128092 podStartE2EDuration="3.662128092s" podCreationTimestamp="2026-01-31 05:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:27:53.652351617 +0000 UTC m=+398.701513213" watchObservedRunningTime="2026-01-31 05:27:53.662128092 +0000 UTC m=+398.711289728" Jan 31 05:27:53 crc kubenswrapper[5050]: I0131 05:27:53.810486 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-5c49c7d9b9-pgdgs" Jan 31 05:27:55 crc kubenswrapper[5050]: I0131 05:27:55.060250 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w994t"] Jan 31 05:27:55 crc kubenswrapper[5050]: W0131 05:27:55.172939 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f72d45d_bc4c_4a9f_97b4_202d3493d7b4.slice/crio-001a24f04a31d6acaca9eea2c00124a25c7099132d21b3cd464406af4980adf4 WatchSource:0}: Error finding container 001a24f04a31d6acaca9eea2c00124a25c7099132d21b3cd464406af4980adf4: Status 404 returned error can't find the container with id 001a24f04a31d6acaca9eea2c00124a25c7099132d21b3cd464406af4980adf4 Jan 31 05:27:55 crc kubenswrapper[5050]: I0131 05:27:55.651593 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w994t" event={"ID":"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4","Type":"ContainerStarted","Data":"001a24f04a31d6acaca9eea2c00124a25c7099132d21b3cd464406af4980adf4"} Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 05:27:56.659773 5050 generic.go:334] "Generic (PLEG): container finished" podID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerID="679de2a2668b3736898e4e2ef45a9e9f52b71fb47e92680d3150c7ba66410648" exitCode=0 Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 05:27:56.659873 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cw5k" event={"ID":"3a56f313-7ca2-4e38-a80b-6395af5eebde","Type":"ContainerDied","Data":"679de2a2668b3736898e4e2ef45a9e9f52b71fb47e92680d3150c7ba66410648"} Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 05:27:56.661773 5050 generic.go:334] "Generic (PLEG): container finished" podID="9f72d45d-bc4c-4a9f-97b4-202d3493d7b4" containerID="3339e0c75cf65250e16341ac8ceb23b200c76a97c31d0588216828ce3bf5c298" exitCode=0 Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 
05:27:56.661889 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w994t" event={"ID":"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4","Type":"ContainerDied","Data":"3339e0c75cf65250e16341ac8ceb23b200c76a97c31d0588216828ce3bf5c298"} Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 05:27:56.665026 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp6l2" event={"ID":"5187fc7e-79c1-49e5-8060-aeeed8bd9870","Type":"ContainerStarted","Data":"d8d5aa3e918f87e8c4531c164b90b293a9ef33e4786a07c3c8ae201345a722f0"} Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 05:27:56.666685 5050 generic.go:334] "Generic (PLEG): container finished" podID="2a24e898-ac43-489c-a204-f817d6fb32a1" containerID="91c02c68770234dcfccf5f0ac9e5de8733d899be449e22a5a8ba67ba1c5f1711" exitCode=0 Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 05:27:56.666723 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxdml" event={"ID":"2a24e898-ac43-489c-a204-f817d6fb32a1","Type":"ContainerDied","Data":"91c02c68770234dcfccf5f0ac9e5de8733d899be449e22a5a8ba67ba1c5f1711"} Jan 31 05:27:56 crc kubenswrapper[5050]: I0131 05:27:56.700745 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gp6l2" podStartSLOduration=3.8229307820000002 podStartE2EDuration="7.700724108s" podCreationTimestamp="2026-01-31 05:27:49 +0000 UTC" firstStartedPulling="2026-01-31 05:27:50.60518251 +0000 UTC m=+395.654344106" lastFinishedPulling="2026-01-31 05:27:54.482975796 +0000 UTC m=+399.532137432" observedRunningTime="2026-01-31 05:27:56.697078644 +0000 UTC m=+401.746240260" watchObservedRunningTime="2026-01-31 05:27:56.700724108 +0000 UTC m=+401.749885704" Jan 31 05:27:57 crc kubenswrapper[5050]: I0131 05:27:57.685208 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxdml" 
event={"ID":"2a24e898-ac43-489c-a204-f817d6fb32a1","Type":"ContainerStarted","Data":"568cc07803479f96724e839db40166e45762c99b6225eba6d929509fa46a5022"} Jan 31 05:27:57 crc kubenswrapper[5050]: I0131 05:27:57.705606 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pxdml" podStartSLOduration=3.002915377 podStartE2EDuration="7.705585241s" podCreationTimestamp="2026-01-31 05:27:50 +0000 UTC" firstStartedPulling="2026-01-31 05:27:52.62713627 +0000 UTC m=+397.676297866" lastFinishedPulling="2026-01-31 05:27:57.329806134 +0000 UTC m=+402.378967730" observedRunningTime="2026-01-31 05:27:57.700659682 +0000 UTC m=+402.749821278" watchObservedRunningTime="2026-01-31 05:27:57.705585241 +0000 UTC m=+402.754746837" Jan 31 05:27:58 crc kubenswrapper[5050]: I0131 05:27:58.691359 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cw5k" event={"ID":"3a56f313-7ca2-4e38-a80b-6395af5eebde","Type":"ContainerStarted","Data":"daf1c28b8c46587dce2faaecca9fe6e6ea59d89577788e7dea04e339b642318d"} Jan 31 05:27:58 crc kubenswrapper[5050]: I0131 05:27:58.692852 5050 generic.go:334] "Generic (PLEG): container finished" podID="9f72d45d-bc4c-4a9f-97b4-202d3493d7b4" containerID="47b8df834e4d39bf929c368f4ba97af982f03629de3cd486661d093fd2d2ffe1" exitCode=0 Jan 31 05:27:58 crc kubenswrapper[5050]: I0131 05:27:58.692907 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w994t" event={"ID":"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4","Type":"ContainerDied","Data":"47b8df834e4d39bf929c368f4ba97af982f03629de3cd486661d093fd2d2ffe1"} Jan 31 05:27:58 crc kubenswrapper[5050]: I0131 05:27:58.711218 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4cw5k" podStartSLOduration=2.646430731 podStartE2EDuration="7.711111073s" podCreationTimestamp="2026-01-31 05:27:51 +0000 UTC" 
firstStartedPulling="2026-01-31 05:27:52.630476935 +0000 UTC m=+397.679638571" lastFinishedPulling="2026-01-31 05:27:57.695157317 +0000 UTC m=+402.744318913" observedRunningTime="2026-01-31 05:27:58.707030598 +0000 UTC m=+403.756192194" watchObservedRunningTime="2026-01-31 05:27:58.711111073 +0000 UTC m=+403.760272669" Jan 31 05:27:59 crc kubenswrapper[5050]: I0131 05:27:59.440096 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:59 crc kubenswrapper[5050]: I0131 05:27:59.440140 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:59 crc kubenswrapper[5050]: I0131 05:27:59.502436 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:27:59 crc kubenswrapper[5050]: I0131 05:27:59.698658 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w994t" event={"ID":"9f72d45d-bc4c-4a9f-97b4-202d3493d7b4","Type":"ContainerStarted","Data":"ec1c6244f63645c21d05fe57a65e24ecd5ef4e5c4d5110b37940f6f0f1ac1c6c"} Jan 31 05:27:59 crc kubenswrapper[5050]: I0131 05:27:59.717732 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w994t" podStartSLOduration=4.005591669 podStartE2EDuration="6.717718905s" podCreationTimestamp="2026-01-31 05:27:53 +0000 UTC" firstStartedPulling="2026-01-31 05:27:56.671342161 +0000 UTC m=+401.720503757" lastFinishedPulling="2026-01-31 05:27:59.383469397 +0000 UTC m=+404.432630993" observedRunningTime="2026-01-31 05:27:59.715934464 +0000 UTC m=+404.765096060" watchObservedRunningTime="2026-01-31 05:27:59.717718905 +0000 UTC m=+404.766880501" Jan 31 05:28:01 crc kubenswrapper[5050]: I0131 05:28:01.199181 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:28:01 crc kubenswrapper[5050]: I0131 05:28:01.199532 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:28:01 crc kubenswrapper[5050]: I0131 05:28:01.800094 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:28:01 crc kubenswrapper[5050]: I0131 05:28:01.800527 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:28:01 crc kubenswrapper[5050]: I0131 05:28:01.866745 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:28:02 crc kubenswrapper[5050]: I0131 05:28:02.247419 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pxdml" podUID="2a24e898-ac43-489c-a204-f817d6fb32a1" containerName="registry-server" probeResult="failure" output=< Jan 31 05:28:02 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 05:28:02 crc kubenswrapper[5050]: > Jan 31 05:28:02 crc kubenswrapper[5050]: I0131 05:28:02.756300 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 05:28:02 crc kubenswrapper[5050]: I0131 05:28:02.981181 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" podUID="82582675-89e4-4783-84df-ea11774c62aa" containerName="registry" containerID="cri-o://85d170fe087a0e766b6377cee8de77dcbc58bdc7c7c7c5e7671e4a3c2c99dd32" gracePeriod=30 Jan 31 05:28:03 crc kubenswrapper[5050]: I0131 05:28:03.594630 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:28:03 
crc kubenswrapper[5050]: I0131 05:28:03.594704 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:28:03 crc kubenswrapper[5050]: I0131 05:28:03.651615 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:28:04 crc kubenswrapper[5050]: I0131 05:28:04.595597 5050 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-8mvp9 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Jan 31 05:28:04 crc kubenswrapper[5050]: I0131 05:28:04.596112 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" podUID="82582675-89e4-4783-84df-ea11774c62aa" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Jan 31 05:28:04 crc kubenswrapper[5050]: I0131 05:28:04.723528 5050 generic.go:334] "Generic (PLEG): container finished" podID="82582675-89e4-4783-84df-ea11774c62aa" containerID="85d170fe087a0e766b6377cee8de77dcbc58bdc7c7c7c5e7671e4a3c2c99dd32" exitCode=0 Jan 31 05:28:04 crc kubenswrapper[5050]: I0131 05:28:04.723624 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" event={"ID":"82582675-89e4-4783-84df-ea11774c62aa","Type":"ContainerDied","Data":"85d170fe087a0e766b6377cee8de77dcbc58bdc7c7c7c5e7671e4a3c2c99dd32"} Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.300751 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.467985 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82582675-89e4-4783-84df-ea11774c62aa-ca-trust-extracted\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.468066 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-trusted-ca\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.468867 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.468994 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82582675-89e4-4783-84df-ea11774c62aa-installation-pull-secrets\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.469084 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-registry-tls\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.469113 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-registry-certificates\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.469226 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.469276 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-bound-sa-token\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.469309 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-7x74c\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-kube-api-access-7x74c\") pod \"82582675-89e4-4783-84df-ea11774c62aa\" (UID: \"82582675-89e4-4783-84df-ea11774c62aa\") " Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.469532 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.470685 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.476310 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82582675-89e4-4783-84df-ea11774c62aa-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.477750 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.479377 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-kube-api-access-7x74c" (OuterVolumeSpecName: "kube-api-access-7x74c") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "kube-api-access-7x74c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.480505 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.483683 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.489729 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82582675-89e4-4783-84df-ea11774c62aa-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "82582675-89e4-4783-84df-ea11774c62aa" (UID: "82582675-89e4-4783-84df-ea11774c62aa"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.570805 5050 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82582675-89e4-4783-84df-ea11774c62aa-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.570847 5050 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.570865 5050 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82582675-89e4-4783-84df-ea11774c62aa-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.570882 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.570899 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x74c\" (UniqueName: \"kubernetes.io/projected/82582675-89e4-4783-84df-ea11774c62aa-kube-api-access-7x74c\") on node \"crc\" DevicePath \"\"" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.570915 5050 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82582675-89e4-4783-84df-ea11774c62aa-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.729634 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" 
event={"ID":"82582675-89e4-4783-84df-ea11774c62aa","Type":"ContainerDied","Data":"f51d4b29ad21bf3969fbfa49147f5ed5deebf0cb63aeedcaf6f0709df8b01fed"} Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.729684 5050 scope.go:117] "RemoveContainer" containerID="85d170fe087a0e766b6377cee8de77dcbc58bdc7c7c7c5e7671e4a3c2c99dd32" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.729785 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mvp9" Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.768668 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mvp9"] Jan 31 05:28:05 crc kubenswrapper[5050]: I0131 05:28:05.773170 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mvp9"] Jan 31 05:28:07 crc kubenswrapper[5050]: I0131 05:28:07.744781 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82582675-89e4-4783-84df-ea11774c62aa" path="/var/lib/kubelet/pods/82582675-89e4-4783-84df-ea11774c62aa/volumes" Jan 31 05:28:09 crc kubenswrapper[5050]: I0131 05:28:09.017805 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:28:09 crc kubenswrapper[5050]: I0131 05:28:09.017886 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:28:09 crc kubenswrapper[5050]: I0131 05:28:09.017945 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:28:09 crc kubenswrapper[5050]: I0131 05:28:09.019915 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f169ed087ec5dc88ea90cd249e2934f2701dee31413e8924bdbf46d544a5a4f8"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:28:09 crc kubenswrapper[5050]: I0131 05:28:09.020059 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://f169ed087ec5dc88ea90cd249e2934f2701dee31413e8924bdbf46d544a5a4f8" gracePeriod=600 Jan 31 05:28:09 crc kubenswrapper[5050]: I0131 05:28:09.507210 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gp6l2" Jan 31 05:28:11 crc kubenswrapper[5050]: I0131 05:28:11.247614 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:28:11 crc kubenswrapper[5050]: I0131 05:28:11.314083 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pxdml" Jan 31 05:28:13 crc kubenswrapper[5050]: I0131 05:28:13.666734 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w994t" Jan 31 05:28:14 crc kubenswrapper[5050]: I0131 05:28:14.896575 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="f169ed087ec5dc88ea90cd249e2934f2701dee31413e8924bdbf46d544a5a4f8" exitCode=0 Jan 31 05:28:14 crc kubenswrapper[5050]: I0131 
05:28:14.896641 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"f169ed087ec5dc88ea90cd249e2934f2701dee31413e8924bdbf46d544a5a4f8"} Jan 31 05:28:14 crc kubenswrapper[5050]: I0131 05:28:14.896938 5050 scope.go:117] "RemoveContainer" containerID="d74b77d7797635c7969c7958999ee3d37e32efde61fb0d19b783100862d21a89" Jan 31 05:28:15 crc kubenswrapper[5050]: I0131 05:28:15.902191 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"15581d05502b7653eca57913422553640138d30ed6c6d91517e5fec43402b57c"} Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.221878 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj"] Jan 31 05:30:00 crc kubenswrapper[5050]: E0131 05:30:00.223223 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82582675-89e4-4783-84df-ea11774c62aa" containerName="registry" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.223251 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="82582675-89e4-4783-84df-ea11774c62aa" containerName="registry" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.223597 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="82582675-89e4-4783-84df-ea11774c62aa" containerName="registry" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.224930 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.232579 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.232714 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.233569 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj"] Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.411264 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/547b8fdc-88c9-450a-9938-c17102596558-secret-volume\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.411486 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/547b8fdc-88c9-450a-9938-c17102596558-config-volume\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.411641 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlp4s\" (UniqueName: \"kubernetes.io/projected/547b8fdc-88c9-450a-9938-c17102596558-kube-api-access-wlp4s\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.512739 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/547b8fdc-88c9-450a-9938-c17102596558-secret-volume\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.512992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/547b8fdc-88c9-450a-9938-c17102596558-config-volume\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.513073 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlp4s\" (UniqueName: \"kubernetes.io/projected/547b8fdc-88c9-450a-9938-c17102596558-kube-api-access-wlp4s\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.514917 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/547b8fdc-88c9-450a-9938-c17102596558-config-volume\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.519904 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/547b8fdc-88c9-450a-9938-c17102596558-secret-volume\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.544214 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlp4s\" (UniqueName: \"kubernetes.io/projected/547b8fdc-88c9-450a-9938-c17102596558-kube-api-access-wlp4s\") pod \"collect-profiles-29497290-mbzhj\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.556464 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:00 crc kubenswrapper[5050]: I0131 05:30:00.991776 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj"] Jan 31 05:30:01 crc kubenswrapper[5050]: I0131 05:30:01.596074 5050 generic.go:334] "Generic (PLEG): container finished" podID="547b8fdc-88c9-450a-9938-c17102596558" containerID="536423c698e5681965e0298a31d7b6268d7e4eb34a4a91e84e8f54d9c00aff3d" exitCode=0 Jan 31 05:30:01 crc kubenswrapper[5050]: I0131 05:30:01.596170 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" event={"ID":"547b8fdc-88c9-450a-9938-c17102596558","Type":"ContainerDied","Data":"536423c698e5681965e0298a31d7b6268d7e4eb34a4a91e84e8f54d9c00aff3d"} Jan 31 05:30:01 crc kubenswrapper[5050]: I0131 05:30:01.596565 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" 
event={"ID":"547b8fdc-88c9-450a-9938-c17102596558","Type":"ContainerStarted","Data":"1c859e17e3109b10b6dfa2be3318cee23c5b7938db40de32c6ef626280a9ef14"} Jan 31 05:30:02 crc kubenswrapper[5050]: I0131 05:30:02.833124 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:02 crc kubenswrapper[5050]: I0131 05:30:02.948663 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlp4s\" (UniqueName: \"kubernetes.io/projected/547b8fdc-88c9-450a-9938-c17102596558-kube-api-access-wlp4s\") pod \"547b8fdc-88c9-450a-9938-c17102596558\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " Jan 31 05:30:02 crc kubenswrapper[5050]: I0131 05:30:02.948766 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/547b8fdc-88c9-450a-9938-c17102596558-secret-volume\") pod \"547b8fdc-88c9-450a-9938-c17102596558\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " Jan 31 05:30:02 crc kubenswrapper[5050]: I0131 05:30:02.948980 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/547b8fdc-88c9-450a-9938-c17102596558-config-volume\") pod \"547b8fdc-88c9-450a-9938-c17102596558\" (UID: \"547b8fdc-88c9-450a-9938-c17102596558\") " Jan 31 05:30:02 crc kubenswrapper[5050]: I0131 05:30:02.950756 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547b8fdc-88c9-450a-9938-c17102596558-config-volume" (OuterVolumeSpecName: "config-volume") pod "547b8fdc-88c9-450a-9938-c17102596558" (UID: "547b8fdc-88c9-450a-9938-c17102596558"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:30:02 crc kubenswrapper[5050]: I0131 05:30:02.955646 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547b8fdc-88c9-450a-9938-c17102596558-kube-api-access-wlp4s" (OuterVolumeSpecName: "kube-api-access-wlp4s") pod "547b8fdc-88c9-450a-9938-c17102596558" (UID: "547b8fdc-88c9-450a-9938-c17102596558"). InnerVolumeSpecName "kube-api-access-wlp4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:30:02 crc kubenswrapper[5050]: I0131 05:30:02.956756 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547b8fdc-88c9-450a-9938-c17102596558-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "547b8fdc-88c9-450a-9938-c17102596558" (UID: "547b8fdc-88c9-450a-9938-c17102596558"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:30:03 crc kubenswrapper[5050]: I0131 05:30:03.049918 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/547b8fdc-88c9-450a-9938-c17102596558-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:30:03 crc kubenswrapper[5050]: I0131 05:30:03.049998 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlp4s\" (UniqueName: \"kubernetes.io/projected/547b8fdc-88c9-450a-9938-c17102596558-kube-api-access-wlp4s\") on node \"crc\" DevicePath \"\"" Jan 31 05:30:03 crc kubenswrapper[5050]: I0131 05:30:03.050012 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/547b8fdc-88c9-450a-9938-c17102596558-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:30:03 crc kubenswrapper[5050]: I0131 05:30:03.614102 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" 
event={"ID":"547b8fdc-88c9-450a-9938-c17102596558","Type":"ContainerDied","Data":"1c859e17e3109b10b6dfa2be3318cee23c5b7938db40de32c6ef626280a9ef14"} Jan 31 05:30:03 crc kubenswrapper[5050]: I0131 05:30:03.614153 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c859e17e3109b10b6dfa2be3318cee23c5b7938db40de32c6ef626280a9ef14" Jan 31 05:30:03 crc kubenswrapper[5050]: I0131 05:30:03.614192 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj" Jan 31 05:30:39 crc kubenswrapper[5050]: I0131 05:30:39.018251 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:30:39 crc kubenswrapper[5050]: I0131 05:30:39.018913 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:31:09 crc kubenswrapper[5050]: I0131 05:31:09.017707 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:31:09 crc kubenswrapper[5050]: I0131 05:31:09.018620 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.017683 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.018633 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.018705 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.019734 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15581d05502b7653eca57913422553640138d30ed6c6d91517e5fec43402b57c"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.019813 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://15581d05502b7653eca57913422553640138d30ed6c6d91517e5fec43402b57c" gracePeriod=600 Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.249912 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="15581d05502b7653eca57913422553640138d30ed6c6d91517e5fec43402b57c" exitCode=0 Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.250009 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"15581d05502b7653eca57913422553640138d30ed6c6d91517e5fec43402b57c"} Jan 31 05:31:39 crc kubenswrapper[5050]: I0131 05:31:39.251212 5050 scope.go:117] "RemoveContainer" containerID="f169ed087ec5dc88ea90cd249e2934f2701dee31413e8924bdbf46d544a5a4f8" Jan 31 05:31:40 crc kubenswrapper[5050]: I0131 05:31:40.260894 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"8fda1476157f97a2d389aaeaa03f696c709d711388e30f77ab369ecc733af733"} Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.274119 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-66tc2"] Jan 31 05:33:31 crc kubenswrapper[5050]: E0131 05:33:31.274774 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547b8fdc-88c9-450a-9938-c17102596558" containerName="collect-profiles" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.274787 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="547b8fdc-88c9-450a-9938-c17102596558" containerName="collect-profiles" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.274885 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="547b8fdc-88c9-450a-9938-c17102596558" containerName="collect-profiles" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.275317 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-66tc2" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.279538 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.280348 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.280721 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kx725" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.298999 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn"] Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.299815 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.304471 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7d22r" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.313183 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn"] Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.318090 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m5rsc"] Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.318657 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.320520 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9srq4\" (UniqueName: \"kubernetes.io/projected/123237fa-3f5a-4153-88c6-0f0efc20738d-kube-api-access-9srq4\") pod \"cert-manager-858654f9db-66tc2\" (UID: \"123237fa-3f5a-4153-88c6-0f0efc20738d\") " pod="cert-manager/cert-manager-858654f9db-66tc2" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.320562 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8vt\" (UniqueName: \"kubernetes.io/projected/e248a1d5-f588-44e2-ad44-87016c519de8-kube-api-access-xr8vt\") pod \"cert-manager-cainjector-cf98fcc89-z8ctn\" (UID: \"e248a1d5-f588-44e2-ad44-87016c519de8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.320606 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwn9z\" (UniqueName: \"kubernetes.io/projected/8d72a638-d293-4df5-b8c0-dcf876f1fa3d-kube-api-access-fwn9z\") pod \"cert-manager-webhook-687f57d79b-m5rsc\" (UID: \"8d72a638-d293-4df5-b8c0-dcf876f1fa3d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.320739 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jcjtb" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.324496 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-66tc2"] Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.328735 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m5rsc"] Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.421586 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8vt\" (UniqueName: \"kubernetes.io/projected/e248a1d5-f588-44e2-ad44-87016c519de8-kube-api-access-xr8vt\") pod \"cert-manager-cainjector-cf98fcc89-z8ctn\" (UID: \"e248a1d5-f588-44e2-ad44-87016c519de8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.421668 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwn9z\" (UniqueName: \"kubernetes.io/projected/8d72a638-d293-4df5-b8c0-dcf876f1fa3d-kube-api-access-fwn9z\") pod \"cert-manager-webhook-687f57d79b-m5rsc\" (UID: \"8d72a638-d293-4df5-b8c0-dcf876f1fa3d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.421706 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9srq4\" (UniqueName: \"kubernetes.io/projected/123237fa-3f5a-4153-88c6-0f0efc20738d-kube-api-access-9srq4\") pod \"cert-manager-858654f9db-66tc2\" (UID: \"123237fa-3f5a-4153-88c6-0f0efc20738d\") " pod="cert-manager/cert-manager-858654f9db-66tc2" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.445859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8vt\" (UniqueName: \"kubernetes.io/projected/e248a1d5-f588-44e2-ad44-87016c519de8-kube-api-access-xr8vt\") pod \"cert-manager-cainjector-cf98fcc89-z8ctn\" (UID: \"e248a1d5-f588-44e2-ad44-87016c519de8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.446516 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwn9z\" (UniqueName: \"kubernetes.io/projected/8d72a638-d293-4df5-b8c0-dcf876f1fa3d-kube-api-access-fwn9z\") pod \"cert-manager-webhook-687f57d79b-m5rsc\" (UID: \"8d72a638-d293-4df5-b8c0-dcf876f1fa3d\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.447160 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9srq4\" (UniqueName: \"kubernetes.io/projected/123237fa-3f5a-4153-88c6-0f0efc20738d-kube-api-access-9srq4\") pod \"cert-manager-858654f9db-66tc2\" (UID: \"123237fa-3f5a-4153-88c6-0f0efc20738d\") " pod="cert-manager/cert-manager-858654f9db-66tc2" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.597281 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-66tc2" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.612977 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.633945 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.839204 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-66tc2"] Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.855090 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.903759 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn"] Jan 31 05:33:31 crc kubenswrapper[5050]: W0131 05:33:31.910158 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode248a1d5_f588_44e2_ad44_87016c519de8.slice/crio-65c3cde1b371a5b41f9c380550e4076cc9e22586b25d0b5a1d9f7a48f8c251d1 WatchSource:0}: Error finding container 65c3cde1b371a5b41f9c380550e4076cc9e22586b25d0b5a1d9f7a48f8c251d1: Status 404 returned error can't 
find the container with id 65c3cde1b371a5b41f9c380550e4076cc9e22586b25d0b5a1d9f7a48f8c251d1 Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.948838 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m5rsc"] Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.996409 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" event={"ID":"8d72a638-d293-4df5-b8c0-dcf876f1fa3d","Type":"ContainerStarted","Data":"f05a7c2801d93292c1197a0bc332e53352a4593243d423452314b0ae9bbe0d60"} Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.997461 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-66tc2" event={"ID":"123237fa-3f5a-4153-88c6-0f0efc20738d","Type":"ContainerStarted","Data":"a749d7f3d72014a3d56b5b8524327acf73989b8a745392d4d71c838cf6493949"} Jan 31 05:33:31 crc kubenswrapper[5050]: I0131 05:33:31.998297 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" event={"ID":"e248a1d5-f588-44e2-ad44-87016c519de8","Type":"ContainerStarted","Data":"65c3cde1b371a5b41f9c380550e4076cc9e22586b25d0b5a1d9f7a48f8c251d1"} Jan 31 05:33:37 crc kubenswrapper[5050]: I0131 05:33:37.040229 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-66tc2" event={"ID":"123237fa-3f5a-4153-88c6-0f0efc20738d","Type":"ContainerStarted","Data":"5043adfe4b8704c77bdd7f5fa9ce6020eaa4244f3827f23812af69656cfdcc24"} Jan 31 05:33:37 crc kubenswrapper[5050]: I0131 05:33:37.042862 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" event={"ID":"e248a1d5-f588-44e2-ad44-87016c519de8","Type":"ContainerStarted","Data":"34058da9500114d378b0abd26a8c4e029bfd766eb2375b24011b20d0432d50f5"} Jan 31 05:33:37 crc kubenswrapper[5050]: I0131 05:33:37.043304 5050 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" Jan 31 05:33:37 crc kubenswrapper[5050]: I0131 05:33:37.043572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" event={"ID":"8d72a638-d293-4df5-b8c0-dcf876f1fa3d","Type":"ContainerStarted","Data":"82d1c04b3dee5b225ea3cdcb8dc7f362a743fedfbfe8f569f8b0c21c0072a514"} Jan 31 05:33:37 crc kubenswrapper[5050]: I0131 05:33:37.055223 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-66tc2" podStartSLOduration=1.408818503 podStartE2EDuration="6.055210149s" podCreationTimestamp="2026-01-31 05:33:31 +0000 UTC" firstStartedPulling="2026-01-31 05:33:31.854870681 +0000 UTC m=+736.904032277" lastFinishedPulling="2026-01-31 05:33:36.501262287 +0000 UTC m=+741.550423923" observedRunningTime="2026-01-31 05:33:37.052258728 +0000 UTC m=+742.101420384" watchObservedRunningTime="2026-01-31 05:33:37.055210149 +0000 UTC m=+742.104371745" Jan 31 05:33:37 crc kubenswrapper[5050]: I0131 05:33:37.075308 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8ctn" podStartSLOduration=1.478480692 podStartE2EDuration="6.075281372s" podCreationTimestamp="2026-01-31 05:33:31 +0000 UTC" firstStartedPulling="2026-01-31 05:33:31.913199649 +0000 UTC m=+736.962361245" lastFinishedPulling="2026-01-31 05:33:36.510000289 +0000 UTC m=+741.559161925" observedRunningTime="2026-01-31 05:33:37.069239806 +0000 UTC m=+742.118401462" watchObservedRunningTime="2026-01-31 05:33:37.075281372 +0000 UTC m=+742.124442978" Jan 31 05:33:37 crc kubenswrapper[5050]: I0131 05:33:37.092372 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" podStartSLOduration=1.539044133 podStartE2EDuration="6.092355654s" podCreationTimestamp="2026-01-31 05:33:31 +0000 UTC" 
firstStartedPulling="2026-01-31 05:33:31.956189894 +0000 UTC m=+737.005351510" lastFinishedPulling="2026-01-31 05:33:36.509501425 +0000 UTC m=+741.558663031" observedRunningTime="2026-01-31 05:33:37.089209377 +0000 UTC m=+742.138371013" watchObservedRunningTime="2026-01-31 05:33:37.092355654 +0000 UTC m=+742.141517260" Jan 31 05:33:39 crc kubenswrapper[5050]: I0131 05:33:39.018370 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:33:39 crc kubenswrapper[5050]: I0131 05:33:39.018743 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.925512 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8hx4t"] Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.927596 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovn-controller" containerID="cri-o://6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" gracePeriod=30 Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.927648 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="northd" containerID="cri-o://9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" gracePeriod=30 Jan 31 05:33:40 crc 
kubenswrapper[5050]: I0131 05:33:40.927735 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" gracePeriod=30 Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.927788 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovn-acl-logging" containerID="cri-o://3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" gracePeriod=30 Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.927697 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-node" containerID="cri-o://dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" gracePeriod=30 Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.927895 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="nbdb" containerID="cri-o://5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" gracePeriod=30 Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.927877 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="sbdb" containerID="cri-o://3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" gracePeriod=30 Jan 31 05:33:40 crc kubenswrapper[5050]: I0131 05:33:40.966714 5050 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" containerID="cri-o://27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" gracePeriod=30 Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.066024 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/3.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.068311 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovn-acl-logging/0.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.069067 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovn-controller/0.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.069341 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" exitCode=0 Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.069362 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" exitCode=143 Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.069370 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" exitCode=143 Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.069401 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" 
event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0"} Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.069424 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16"} Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.069434 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b"} Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.070674 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/2.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.071269 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/1.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.071292 5050 generic.go:334] "Generic (PLEG): container finished" podID="eeb03b23-b94b-4aaf-aac2-a04db399ec55" containerID="ac8fc87d22a662d586d590e706ecab572ece682431bb937e264475a7f7d39130" exitCode=2 Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.071305 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerDied","Data":"ac8fc87d22a662d586d590e706ecab572ece682431bb937e264475a7f7d39130"} Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.071325 5050 scope.go:117] "RemoveContainer" containerID="bd606c10b8ebaae532179c232f96419cbbf8ce65dfddf7186a5f92ae8b54d966" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 
05:33:41.071612 5050 scope.go:117] "RemoveContainer" containerID="ac8fc87d22a662d586d590e706ecab572ece682431bb937e264475a7f7d39130" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.266420 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/3.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.269859 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovn-acl-logging/0.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.271530 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovn-controller/0.log" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.272137 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332300 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rjvnx"] Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332522 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332544 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332563 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="nbdb" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332573 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="nbdb" Jan 31 05:33:41 crc 
kubenswrapper[5050]: E0131 05:33:41.332586 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="northd" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332594 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="northd" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332604 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332611 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332623 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332631 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332641 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-node" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332650 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-node" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332662 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kubecfg-setup" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332670 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kubecfg-setup" Jan 31 05:33:41 crc 
kubenswrapper[5050]: E0131 05:33:41.332678 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovn-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332686 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovn-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332696 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332703 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332713 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332720 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332728 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovn-acl-logging" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332735 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovn-acl-logging" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.332745 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="sbdb" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332753 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="sbdb" Jan 31 05:33:41 crc 
kubenswrapper[5050]: I0131 05:33:41.332858 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="nbdb" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332871 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332881 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovn-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332888 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332898 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="sbdb" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332906 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="northd" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332916 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332927 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="kube-rbac-proxy-node" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332936 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.332946 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" 
containerName="ovn-acl-logging" Jan 31 05:33:41 crc kubenswrapper[5050]: E0131 05:33:41.333230 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.333241 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.333340 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.333587 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerName="ovnkube-controller" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.335191 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.372869 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-log-socket\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.372927 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-netns\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.372973 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-bin\") pod 
\"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373003 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-ovn-kubernetes\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373030 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-log-socket" (OuterVolumeSpecName: "log-socket") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373049 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-script-lib\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373073 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-systemd-units\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373080 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373092 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373103 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-kubelet\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373103 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373134 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373143 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373164 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-openvswitch\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373189 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-ovn\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373212 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-netd\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373233 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwcbj\" (UniqueName: \"kubernetes.io/projected/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-kube-api-access-lwcbj\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373248 5050 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373257 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373261 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-env-overrides\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373276 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373328 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373382 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-config\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373411 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-etc-openvswitch\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373421 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373437 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovn-node-metrics-cert\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373460 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-slash\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373482 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-systemd\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373508 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-node-log\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373531 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-var-lib-openvswitch\") pod \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\" (UID: \"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e\") " Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373529 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-slash" (OuterVolumeSpecName: "host-slash") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373561 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373626 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-node-log" (OuterVolumeSpecName: "node-log") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373664 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373675 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsh4c\" (UniqueName: \"kubernetes.io/projected/ad574d35-14ed-408a-affd-e5fbe9724bda-kube-api-access-rsh4c\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373715 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373709 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373716 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373785 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-kubelet\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373816 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-etc-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373840 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-systemd-units\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373857 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-env-overrides\") pod 
\"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373875 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-ovn\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373892 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad574d35-14ed-408a-affd-e5fbe9724bda-ovn-node-metrics-cert\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.373916 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-cni-bin\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374019 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-var-lib-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374109 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-run-ovn-kubernetes\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374156 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-slash\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374191 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-ovnkube-config\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374218 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-node-log\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374309 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-systemd\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374337 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-cni-netd\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374358 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-log-socket\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374391 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-run-netns\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374426 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-ovnkube-script-lib\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374455 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374539 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374560 5050 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374578 5050 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374592 5050 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374607 5050 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374622 5050 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374636 5050 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374651 5050 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-var-lib-cni-networks-ovn-kubernetes\") on node 
\"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374666 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374681 5050 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374696 5050 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374709 5050 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374723 5050 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374737 5050 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374751 5050 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374765 5050 reconciler_common.go:293] "Volume detached for 
volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.374779 5050 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.379727 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-kube-api-access-lwcbj" (OuterVolumeSpecName: "kube-api-access-lwcbj") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "kube-api-access-lwcbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.381882 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.387285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" (UID: "7d29ecd7-304b-4356-9f7c-c4d8d4ee809e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.475944 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-run-ovn-kubernetes\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476046 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-slash\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476092 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-ovnkube-config\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476131 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-node-log\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476194 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-systemd\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc 
kubenswrapper[5050]: I0131 05:33:41.476225 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-cni-netd\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476262 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-log-socket\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476297 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-run-netns\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476339 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-ovnkube-script-lib\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476377 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476419 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rsh4c\" (UniqueName: \"kubernetes.io/projected/ad574d35-14ed-408a-affd-e5fbe9724bda-kube-api-access-rsh4c\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476452 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476479 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-kubelet\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476514 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-etc-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476552 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-systemd-units\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476588 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-env-overrides\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476627 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-ovn\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476665 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad574d35-14ed-408a-affd-e5fbe9724bda-ovn-node-metrics-cert\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476781 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-cni-bin\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.476828 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-var-lib-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.477415 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwcbj\" (UniqueName: 
\"kubernetes.io/projected/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-kube-api-access-lwcbj\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.477442 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.477461 5050 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.477560 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-var-lib-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.477643 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-run-ovn-kubernetes\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.477704 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-slash\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.478746 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-ovnkube-config\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.478849 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-node-log\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.478908 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-systemd\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.478985 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-cni-netd\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479040 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-etc-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479091 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-kubelet\") pod \"ovnkube-node-rjvnx\" (UID: 
\"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479137 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-ovn\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479223 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-log-socket\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479241 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-systemd-units\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479244 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-run-netns\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479306 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-run-openvswitch\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 
05:33:41.479512 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-cni-bin\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.479555 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad574d35-14ed-408a-affd-e5fbe9724bda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.480170 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-env-overrides\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.480659 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad574d35-14ed-408a-affd-e5fbe9724bda-ovnkube-script-lib\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.487151 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad574d35-14ed-408a-affd-e5fbe9724bda-ovn-node-metrics-cert\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.517298 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rsh4c\" (UniqueName: \"kubernetes.io/projected/ad574d35-14ed-408a-affd-e5fbe9724bda-kube-api-access-rsh4c\") pod \"ovnkube-node-rjvnx\" (UID: \"ad574d35-14ed-408a-affd-e5fbe9724bda\") " pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.637133 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-m5rsc" Jan 31 05:33:41 crc kubenswrapper[5050]: I0131 05:33:41.650587 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:41 crc kubenswrapper[5050]: W0131 05:33:41.678622 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad574d35_14ed_408a_affd_e5fbe9724bda.slice/crio-09d4f88e3256d3f3673e6f56a534807410a042c7dd9b8785dffea60c3aa4d479 WatchSource:0}: Error finding container 09d4f88e3256d3f3673e6f56a534807410a042c7dd9b8785dffea60c3aa4d479: Status 404 returned error can't find the container with id 09d4f88e3256d3f3673e6f56a534807410a042c7dd9b8785dffea60c3aa4d479 Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.079558 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"09d4f88e3256d3f3673e6f56a534807410a042c7dd9b8785dffea60c3aa4d479"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.084511 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovnkube-controller/3.log" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.087803 5050 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovn-acl-logging/0.log" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.088672 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8hx4t_7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/ovn-controller/0.log" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089528 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" exitCode=0 Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089576 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" exitCode=0 Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089590 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" exitCode=0 Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089588 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089656 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" 
event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089700 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089724 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089728 5050 scope.go:117] "RemoveContainer" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.089605 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" exitCode=0 Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090261 5050 generic.go:334] "Generic (PLEG): container finished" podID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" containerID="dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" exitCode=0 Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090297 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090340 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8hx4t" event={"ID":"7d29ecd7-304b-4356-9f7c-c4d8d4ee809e","Type":"ContainerDied","Data":"5446c51fa9c4ee345a3da3236428890e4de6be73d56fc0d8300a97a00cd6a33f"} Jan 31 
05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090364 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090382 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090394 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090406 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090418 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090429 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090440 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090452 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b"} Jan 31 
05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.090463 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.095584 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgpmd_eeb03b23-b94b-4aaf-aac2-a04db399ec55/kube-multus/2.log" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.095649 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgpmd" event={"ID":"eeb03b23-b94b-4aaf-aac2-a04db399ec55","Type":"ContainerStarted","Data":"bafda729033b7f5b2ebb8f65b67d7547ee55014f85c11f39eb0ef4005c38771d"} Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.111851 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.149423 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8hx4t"] Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.151478 5050 scope.go:117] "RemoveContainer" containerID="3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.154049 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8hx4t"] Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.170862 5050 scope.go:117] "RemoveContainer" containerID="5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.182662 5050 scope.go:117] "RemoveContainer" containerID="9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.200507 5050 scope.go:117] "RemoveContainer" containerID="76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" Jan 
31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.221086 5050 scope.go:117] "RemoveContainer" containerID="dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.295164 5050 scope.go:117] "RemoveContainer" containerID="3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.328597 5050 scope.go:117] "RemoveContainer" containerID="6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.352922 5050 scope.go:117] "RemoveContainer" containerID="d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.373753 5050 scope.go:117] "RemoveContainer" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.374337 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": container with ID starting with 27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a not found: ID does not exist" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.374393 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a"} err="failed to get container status \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": rpc error: code = NotFound desc = could not find container \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": container with ID starting with 27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 
05:33:42.374430 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.374799 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": container with ID starting with 85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284 not found: ID does not exist" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.374848 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"} err="failed to get container status \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": rpc error: code = NotFound desc = could not find container \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": container with ID starting with 85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.374878 5050 scope.go:117] "RemoveContainer" containerID="3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.375417 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": container with ID starting with 3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d not found: ID does not exist" containerID="3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.376104 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d"} err="failed to get container status \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": rpc error: code = NotFound desc = could not find container \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": container with ID starting with 3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.376188 5050 scope.go:117] "RemoveContainer" containerID="5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.376781 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": container with ID starting with 5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3 not found: ID does not exist" containerID="5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.376892 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3"} err="failed to get container status \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": rpc error: code = NotFound desc = could not find container \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": container with ID starting with 5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.376927 5050 scope.go:117] "RemoveContainer" containerID="9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.377517 5050 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": container with ID starting with 9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425 not found: ID does not exist" containerID="9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.377579 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425"} err="failed to get container status \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": rpc error: code = NotFound desc = could not find container \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": container with ID starting with 9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.377620 5050 scope.go:117] "RemoveContainer" containerID="76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.378052 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": container with ID starting with 76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0 not found: ID does not exist" containerID="76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.378126 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0"} err="failed to get container status \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": rpc error: code = NotFound desc = could not find container 
\"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": container with ID starting with 76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.378166 5050 scope.go:117] "RemoveContainer" containerID="dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.378500 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": container with ID starting with dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd not found: ID does not exist" containerID="dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.378541 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd"} err="failed to get container status \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": rpc error: code = NotFound desc = could not find container \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": container with ID starting with dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.378561 5050 scope.go:117] "RemoveContainer" containerID="3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.378988 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": container with ID starting with 3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16 not found: ID does not exist" 
containerID="3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.379050 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16"} err="failed to get container status \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": rpc error: code = NotFound desc = could not find container \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": container with ID starting with 3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.379089 5050 scope.go:117] "RemoveContainer" containerID="6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.379489 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": container with ID starting with 6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b not found: ID does not exist" containerID="6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.379557 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b"} err="failed to get container status \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": rpc error: code = NotFound desc = could not find container \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": container with ID starting with 6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.379597 5050 scope.go:117] 
"RemoveContainer" containerID="d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237" Jan 31 05:33:42 crc kubenswrapper[5050]: E0131 05:33:42.379921 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": container with ID starting with d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237 not found: ID does not exist" containerID="d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.380183 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237"} err="failed to get container status \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": rpc error: code = NotFound desc = could not find container \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": container with ID starting with d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.380228 5050 scope.go:117] "RemoveContainer" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.380606 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a"} err="failed to get container status \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": rpc error: code = NotFound desc = could not find container \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": container with ID starting with 27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.380665 5050 
scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.381040 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"} err="failed to get container status \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": rpc error: code = NotFound desc = could not find container \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": container with ID starting with 85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.381089 5050 scope.go:117] "RemoveContainer" containerID="3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.381499 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d"} err="failed to get container status \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": rpc error: code = NotFound desc = could not find container \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": container with ID starting with 3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.381559 5050 scope.go:117] "RemoveContainer" containerID="5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.382037 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3"} err="failed to get container status \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": rpc 
error: code = NotFound desc = could not find container \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": container with ID starting with 5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.382079 5050 scope.go:117] "RemoveContainer" containerID="9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.382630 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425"} err="failed to get container status \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": rpc error: code = NotFound desc = could not find container \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": container with ID starting with 9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.382665 5050 scope.go:117] "RemoveContainer" containerID="76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.383116 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0"} err="failed to get container status \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": rpc error: code = NotFound desc = could not find container \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": container with ID starting with 76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.383149 5050 scope.go:117] "RemoveContainer" containerID="dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" Jan 31 05:33:42 crc 
kubenswrapper[5050]: I0131 05:33:42.383816 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd"} err="failed to get container status \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": rpc error: code = NotFound desc = could not find container \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": container with ID starting with dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.383863 5050 scope.go:117] "RemoveContainer" containerID="3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.384354 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16"} err="failed to get container status \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": rpc error: code = NotFound desc = could not find container \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": container with ID starting with 3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.384422 5050 scope.go:117] "RemoveContainer" containerID="6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.384762 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b"} err="failed to get container status \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": rpc error: code = NotFound desc = could not find container \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": container 
with ID starting with 6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.384797 5050 scope.go:117] "RemoveContainer" containerID="d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.385845 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237"} err="failed to get container status \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": rpc error: code = NotFound desc = could not find container \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": container with ID starting with d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.385927 5050 scope.go:117] "RemoveContainer" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.386890 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a"} err="failed to get container status \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": rpc error: code = NotFound desc = could not find container \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": container with ID starting with 27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.386922 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.387460 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"} err="failed to get container status \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": rpc error: code = NotFound desc = could not find container \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": container with ID starting with 85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.387494 5050 scope.go:117] "RemoveContainer" containerID="3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.387861 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d"} err="failed to get container status \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": rpc error: code = NotFound desc = could not find container \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": container with ID starting with 3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.387900 5050 scope.go:117] "RemoveContainer" containerID="5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.388195 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3"} err="failed to get container status \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": rpc error: code = NotFound desc = could not find container \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": container with ID starting with 5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3 not found: ID does not 
exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.388236 5050 scope.go:117] "RemoveContainer" containerID="9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.388561 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425"} err="failed to get container status \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": rpc error: code = NotFound desc = could not find container \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": container with ID starting with 9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.388602 5050 scope.go:117] "RemoveContainer" containerID="76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.388890 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0"} err="failed to get container status \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": rpc error: code = NotFound desc = could not find container \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": container with ID starting with 76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.388922 5050 scope.go:117] "RemoveContainer" containerID="dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.390186 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd"} err="failed to get container status 
\"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": rpc error: code = NotFound desc = could not find container \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": container with ID starting with dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.390286 5050 scope.go:117] "RemoveContainer" containerID="3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.391271 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16"} err="failed to get container status \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": rpc error: code = NotFound desc = could not find container \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": container with ID starting with 3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.391301 5050 scope.go:117] "RemoveContainer" containerID="6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.391585 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b"} err="failed to get container status \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": rpc error: code = NotFound desc = could not find container \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": container with ID starting with 6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.391613 5050 scope.go:117] "RemoveContainer" 
containerID="d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.391917 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237"} err="failed to get container status \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": rpc error: code = NotFound desc = could not find container \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": container with ID starting with d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.391990 5050 scope.go:117] "RemoveContainer" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.392378 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a"} err="failed to get container status \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": rpc error: code = NotFound desc = could not find container \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": container with ID starting with 27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.392411 5050 scope.go:117] "RemoveContainer" containerID="85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.392746 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284"} err="failed to get container status \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": rpc error: code = NotFound desc = could 
not find container \"85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284\": container with ID starting with 85028d24dd9a574b6ffd4f6f5f869c022710455b6c1b7aa547adc5fc3d8b6284 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.392780 5050 scope.go:117] "RemoveContainer" containerID="3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.393117 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d"} err="failed to get container status \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": rpc error: code = NotFound desc = could not find container \"3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d\": container with ID starting with 3407cd491ea15205881768266f5d7117425db332cea622b76c6b3417c5bf579d not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.393157 5050 scope.go:117] "RemoveContainer" containerID="5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.393451 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3"} err="failed to get container status \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": rpc error: code = NotFound desc = could not find container \"5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3\": container with ID starting with 5e2df5915a6480e26eeda6a9a5436f43d2f9eb8b446633c8debdaa9d79c5e2e3 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.393485 5050 scope.go:117] "RemoveContainer" containerID="9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 
05:33:42.393795 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425"} err="failed to get container status \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": rpc error: code = NotFound desc = could not find container \"9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425\": container with ID starting with 9dec6e7437a884116e57919576ad825cc20044fd97fffa6ff0547d28e0ccf425 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.393836 5050 scope.go:117] "RemoveContainer" containerID="76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.394110 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0"} err="failed to get container status \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": rpc error: code = NotFound desc = could not find container \"76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0\": container with ID starting with 76c5ff6eb5b0591db670fc8d3d2d2b67baa86f688c13f6197368d4ff4cf2a8a0 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.394140 5050 scope.go:117] "RemoveContainer" containerID="dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.394599 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd"} err="failed to get container status \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": rpc error: code = NotFound desc = could not find container \"dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd\": container with ID starting with 
dab8ab3b56f44342cfdc2787b763f822c5d8c59cf36ea12f2f5bc2cd54eb8bbd not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.394645 5050 scope.go:117] "RemoveContainer" containerID="3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.395078 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16"} err="failed to get container status \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": rpc error: code = NotFound desc = could not find container \"3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16\": container with ID starting with 3dafa4666b97c107601c0fb84e28772115c5ee0c742a7e3c3c2fe4f4bd406d16 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.395120 5050 scope.go:117] "RemoveContainer" containerID="6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.395460 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b"} err="failed to get container status \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": rpc error: code = NotFound desc = could not find container \"6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b\": container with ID starting with 6ffc3fbae7901d476cd261befaa1d60d88bb1a38c554871774673f0fddab725b not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.395505 5050 scope.go:117] "RemoveContainer" containerID="d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.395802 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237"} err="failed to get container status \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": rpc error: code = NotFound desc = could not find container \"d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237\": container with ID starting with d7c8d69bac1f72df4e96fd3a9ebd06a5165507e7ebfd9094fb850c945934f237 not found: ID does not exist" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.395830 5050 scope.go:117] "RemoveContainer" containerID="27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a" Jan 31 05:33:42 crc kubenswrapper[5050]: I0131 05:33:42.396327 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a"} err="failed to get container status \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": rpc error: code = NotFound desc = could not find container \"27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a\": container with ID starting with 27da1ba78e57a5618f85278ade58fd5a0354030dcdc2223077d88703799a160a not found: ID does not exist" Jan 31 05:33:43 crc kubenswrapper[5050]: I0131 05:33:43.105979 5050 generic.go:334] "Generic (PLEG): container finished" podID="ad574d35-14ed-408a-affd-e5fbe9724bda" containerID="b76fa4bc303ffc31e5f3b9f4c2bf43a6c54cd0fa9cd08f90970de9d569e4c326" exitCode=0 Jan 31 05:33:43 crc kubenswrapper[5050]: I0131 05:33:43.106036 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerDied","Data":"b76fa4bc303ffc31e5f3b9f4c2bf43a6c54cd0fa9cd08f90970de9d569e4c326"} Jan 31 05:33:43 crc kubenswrapper[5050]: I0131 05:33:43.748579 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d29ecd7-304b-4356-9f7c-c4d8d4ee809e" 
path="/var/lib/kubelet/pods/7d29ecd7-304b-4356-9f7c-c4d8d4ee809e/volumes" Jan 31 05:33:44 crc kubenswrapper[5050]: I0131 05:33:44.116723 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"742b8f72a7e48eba260fa10fe726644935ca2af51f02fd113c597560d85a0ff3"} Jan 31 05:33:44 crc kubenswrapper[5050]: I0131 05:33:44.117077 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"3d404b23e1efb1729a465ba5d4fb5c643de3f4d3f92eb5e58b4bbafad6c702ec"} Jan 31 05:33:44 crc kubenswrapper[5050]: I0131 05:33:44.117089 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"5a156889f9e48cc8324f4028b487f3460f006ba72f6c54d5bc6fd90666c2de56"} Jan 31 05:33:44 crc kubenswrapper[5050]: I0131 05:33:44.117097 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"98fce91d3023df057f97a4933fdee3e77640b0c53fc76411b2913f05e3648052"} Jan 31 05:33:44 crc kubenswrapper[5050]: I0131 05:33:44.117105 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"c3007d0fdd499193cac1e1057cc7b537f57bd1f22703975700d98918ef977ceb"} Jan 31 05:33:45 crc kubenswrapper[5050]: I0131 05:33:45.126285 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"a952de709afded936826be3001809563b2bd2a0d7ba0ca12411ea83e34a64c3c"} Jan 
31 05:33:47 crc kubenswrapper[5050]: I0131 05:33:47.144862 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"d59170aaa7610bb6e1843a3648cb7407f23d067a3b52b549f27df9cd910f8c3e"} Jan 31 05:33:49 crc kubenswrapper[5050]: I0131 05:33:49.162078 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" event={"ID":"ad574d35-14ed-408a-affd-e5fbe9724bda","Type":"ContainerStarted","Data":"0b72d124faf2739a6f2f99ed804d5e539101abe5f9342b6928e31835211b7663"} Jan 31 05:33:49 crc kubenswrapper[5050]: I0131 05:33:49.162340 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:49 crc kubenswrapper[5050]: I0131 05:33:49.162351 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:49 crc kubenswrapper[5050]: I0131 05:33:49.162359 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:49 crc kubenswrapper[5050]: I0131 05:33:49.201061 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:49 crc kubenswrapper[5050]: I0131 05:33:49.205316 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:33:49 crc kubenswrapper[5050]: I0131 05:33:49.207147 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" podStartSLOduration=8.207130469 podStartE2EDuration="8.207130469s" podCreationTimestamp="2026-01-31 05:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
05:33:49.204597739 +0000 UTC m=+754.253759365" watchObservedRunningTime="2026-01-31 05:33:49.207130469 +0000 UTC m=+754.256292065" Jan 31 05:33:50 crc kubenswrapper[5050]: I0131 05:33:50.046113 5050 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 05:34:09 crc kubenswrapper[5050]: I0131 05:34:09.017775 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:34:09 crc kubenswrapper[5050]: I0131 05:34:09.018398 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:34:11 crc kubenswrapper[5050]: I0131 05:34:11.686679 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rjvnx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.195059 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx"] Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.198059 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.204757 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.232915 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx"] Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.378603 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.378690 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlbp\" (UniqueName: \"kubernetes.io/projected/e6c90b3e-2181-426e-aee2-e92a2694ac1c-kube-api-access-bvlbp\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.378743 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: 
I0131 05:34:23.479862 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.479936 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlbp\" (UniqueName: \"kubernetes.io/projected/e6c90b3e-2181-426e-aee2-e92a2694ac1c-kube-api-access-bvlbp\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.480606 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.481618 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.482047 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.517281 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlbp\" (UniqueName: \"kubernetes.io/projected/e6c90b3e-2181-426e-aee2-e92a2694ac1c-kube-api-access-bvlbp\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.527401 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:23 crc kubenswrapper[5050]: I0131 05:34:23.820170 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx"] Jan 31 05:34:24 crc kubenswrapper[5050]: I0131 05:34:24.422378 5050 generic.go:334] "Generic (PLEG): container finished" podID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerID="77722a4004f2c3a33b854a2478c9bd49b9bce9fd3326f1027febf07ddeaa2b29" exitCode=0 Jan 31 05:34:24 crc kubenswrapper[5050]: I0131 05:34:24.422462 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" event={"ID":"e6c90b3e-2181-426e-aee2-e92a2694ac1c","Type":"ContainerDied","Data":"77722a4004f2c3a33b854a2478c9bd49b9bce9fd3326f1027febf07ddeaa2b29"} Jan 31 05:34:24 crc kubenswrapper[5050]: I0131 05:34:24.422531 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" event={"ID":"e6c90b3e-2181-426e-aee2-e92a2694ac1c","Type":"ContainerStarted","Data":"8ca7110e4b2d355f55563b7f9e58d59b6be63e6fe32b3247fd615d6355647a03"} Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.516776 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7jx94"] Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.520025 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.532174 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jx94"] Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.617194 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-catalog-content\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.617266 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-utilities\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.617349 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgslt\" (UniqueName: \"kubernetes.io/projected/163b82b4-21b0-4c02-b09e-7985bc08fa11-kube-api-access-jgslt\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " 
pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.718222 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-utilities\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.718317 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgslt\" (UniqueName: \"kubernetes.io/projected/163b82b4-21b0-4c02-b09e-7985bc08fa11-kube-api-access-jgslt\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.718373 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-catalog-content\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.718738 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-utilities\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.719199 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-catalog-content\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc 
kubenswrapper[5050]: I0131 05:34:25.748050 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgslt\" (UniqueName: \"kubernetes.io/projected/163b82b4-21b0-4c02-b09e-7985bc08fa11-kube-api-access-jgslt\") pod \"redhat-operators-7jx94\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:25 crc kubenswrapper[5050]: I0131 05:34:25.854880 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:26 crc kubenswrapper[5050]: I0131 05:34:26.083489 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jx94"] Jan 31 05:34:26 crc kubenswrapper[5050]: W0131 05:34:26.084891 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod163b82b4_21b0_4c02_b09e_7985bc08fa11.slice/crio-2dd52b6c5f9d0aaf4189e64b856282359bc7dcf5157ff01d1346380af30b5a9c WatchSource:0}: Error finding container 2dd52b6c5f9d0aaf4189e64b856282359bc7dcf5157ff01d1346380af30b5a9c: Status 404 returned error can't find the container with id 2dd52b6c5f9d0aaf4189e64b856282359bc7dcf5157ff01d1346380af30b5a9c Jan 31 05:34:26 crc kubenswrapper[5050]: I0131 05:34:26.441871 5050 generic.go:334] "Generic (PLEG): container finished" podID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerID="107a84bbfb965eda2736a62ef1124ce3dfb6596f608c26083ed5ae0b92361a33" exitCode=0 Jan 31 05:34:26 crc kubenswrapper[5050]: I0131 05:34:26.442276 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" event={"ID":"e6c90b3e-2181-426e-aee2-e92a2694ac1c","Type":"ContainerDied","Data":"107a84bbfb965eda2736a62ef1124ce3dfb6596f608c26083ed5ae0b92361a33"} Jan 31 05:34:26 crc kubenswrapper[5050]: I0131 05:34:26.444658 5050 generic.go:334] "Generic (PLEG): container 
finished" podID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerID="332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e" exitCode=0 Jan 31 05:34:26 crc kubenswrapper[5050]: I0131 05:34:26.444709 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jx94" event={"ID":"163b82b4-21b0-4c02-b09e-7985bc08fa11","Type":"ContainerDied","Data":"332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e"} Jan 31 05:34:26 crc kubenswrapper[5050]: I0131 05:34:26.444746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jx94" event={"ID":"163b82b4-21b0-4c02-b09e-7985bc08fa11","Type":"ContainerStarted","Data":"2dd52b6c5f9d0aaf4189e64b856282359bc7dcf5157ff01d1346380af30b5a9c"} Jan 31 05:34:27 crc kubenswrapper[5050]: I0131 05:34:27.462551 5050 generic.go:334] "Generic (PLEG): container finished" podID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerID="942f916871975ab570f4398368302d862b4cd6ae05027be90cc0a170880a8a51" exitCode=0 Jan 31 05:34:27 crc kubenswrapper[5050]: I0131 05:34:27.462667 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" event={"ID":"e6c90b3e-2181-426e-aee2-e92a2694ac1c","Type":"ContainerDied","Data":"942f916871975ab570f4398368302d862b4cd6ae05027be90cc0a170880a8a51"} Jan 31 05:34:27 crc kubenswrapper[5050]: I0131 05:34:27.472781 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jx94" event={"ID":"163b82b4-21b0-4c02-b09e-7985bc08fa11","Type":"ContainerStarted","Data":"515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66"} Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.481370 5050 generic.go:334] "Generic (PLEG): container finished" podID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerID="515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66" exitCode=0 Jan 31 05:34:28 crc 
kubenswrapper[5050]: I0131 05:34:28.481431 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jx94" event={"ID":"163b82b4-21b0-4c02-b09e-7985bc08fa11","Type":"ContainerDied","Data":"515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66"} Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.773750 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.960924 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-bundle\") pod \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.961006 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-util\") pod \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.961045 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvlbp\" (UniqueName: \"kubernetes.io/projected/e6c90b3e-2181-426e-aee2-e92a2694ac1c-kube-api-access-bvlbp\") pod \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\" (UID: \"e6c90b3e-2181-426e-aee2-e92a2694ac1c\") " Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.962155 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-bundle" (OuterVolumeSpecName: "bundle") pod "e6c90b3e-2181-426e-aee2-e92a2694ac1c" (UID: "e6c90b3e-2181-426e-aee2-e92a2694ac1c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.969456 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c90b3e-2181-426e-aee2-e92a2694ac1c-kube-api-access-bvlbp" (OuterVolumeSpecName: "kube-api-access-bvlbp") pod "e6c90b3e-2181-426e-aee2-e92a2694ac1c" (UID: "e6c90b3e-2181-426e-aee2-e92a2694ac1c"). InnerVolumeSpecName "kube-api-access-bvlbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:34:28 crc kubenswrapper[5050]: I0131 05:34:28.996092 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-util" (OuterVolumeSpecName: "util") pod "e6c90b3e-2181-426e-aee2-e92a2694ac1c" (UID: "e6c90b3e-2181-426e-aee2-e92a2694ac1c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.062644 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.062705 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c90b3e-2181-426e-aee2-e92a2694ac1c-util\") on node \"crc\" DevicePath \"\"" Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.062729 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvlbp\" (UniqueName: \"kubernetes.io/projected/e6c90b3e-2181-426e-aee2-e92a2694ac1c-kube-api-access-bvlbp\") on node \"crc\" DevicePath \"\"" Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.489808 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" 
event={"ID":"e6c90b3e-2181-426e-aee2-e92a2694ac1c","Type":"ContainerDied","Data":"8ca7110e4b2d355f55563b7f9e58d59b6be63e6fe32b3247fd615d6355647a03"} Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.489891 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ca7110e4b2d355f55563b7f9e58d59b6be63e6fe32b3247fd615d6355647a03" Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.489835 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx" Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.492251 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jx94" event={"ID":"163b82b4-21b0-4c02-b09e-7985bc08fa11","Type":"ContainerStarted","Data":"60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53"} Jan 31 05:34:29 crc kubenswrapper[5050]: I0131 05:34:29.519672 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7jx94" podStartSLOduration=2.093527343 podStartE2EDuration="4.519652182s" podCreationTimestamp="2026-01-31 05:34:25 +0000 UTC" firstStartedPulling="2026-01-31 05:34:26.459025908 +0000 UTC m=+791.508187504" lastFinishedPulling="2026-01-31 05:34:28.885150737 +0000 UTC m=+793.934312343" observedRunningTime="2026-01-31 05:34:29.512361442 +0000 UTC m=+794.561523058" watchObservedRunningTime="2026-01-31 05:34:29.519652182 +0000 UTC m=+794.568813788" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.621179 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jwqtz"] Jan 31 05:34:33 crc kubenswrapper[5050]: E0131 05:34:33.621763 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerName="pull" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.621779 5050 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerName="pull" Jan 31 05:34:33 crc kubenswrapper[5050]: E0131 05:34:33.621788 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerName="extract" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.621795 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerName="extract" Jan 31 05:34:33 crc kubenswrapper[5050]: E0131 05:34:33.621824 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerName="util" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.621832 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerName="util" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.621988 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c90b3e-2181-426e-aee2-e92a2694ac1c" containerName="extract" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.622480 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.624273 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.624307 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lhvnz" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.624573 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.638147 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jwqtz"] Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.718229 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9vz2\" (UniqueName: \"kubernetes.io/projected/4803e93f-9a9e-43eb-8d0f-671abc22f91a-kube-api-access-p9vz2\") pod \"nmstate-operator-646758c888-jwqtz\" (UID: \"4803e93f-9a9e-43eb-8d0f-671abc22f91a\") " pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.819012 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9vz2\" (UniqueName: \"kubernetes.io/projected/4803e93f-9a9e-43eb-8d0f-671abc22f91a-kube-api-access-p9vz2\") pod \"nmstate-operator-646758c888-jwqtz\" (UID: \"4803e93f-9a9e-43eb-8d0f-671abc22f91a\") " pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.852189 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9vz2\" (UniqueName: \"kubernetes.io/projected/4803e93f-9a9e-43eb-8d0f-671abc22f91a-kube-api-access-p9vz2\") pod \"nmstate-operator-646758c888-jwqtz\" (UID: 
\"4803e93f-9a9e-43eb-8d0f-671abc22f91a\") " pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" Jan 31 05:34:33 crc kubenswrapper[5050]: I0131 05:34:33.935702 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" Jan 31 05:34:34 crc kubenswrapper[5050]: I0131 05:34:34.179451 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jwqtz"] Jan 31 05:34:34 crc kubenswrapper[5050]: I0131 05:34:34.525191 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" event={"ID":"4803e93f-9a9e-43eb-8d0f-671abc22f91a","Type":"ContainerStarted","Data":"44681a3f3f33a97d0feed858f7e878b27ccad0ce82cb68ea2f3e1ae7278e161c"} Jan 31 05:34:35 crc kubenswrapper[5050]: I0131 05:34:35.856004 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:35 crc kubenswrapper[5050]: I0131 05:34:35.856401 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:36 crc kubenswrapper[5050]: I0131 05:34:36.908783 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7jx94" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="registry-server" probeResult="failure" output=< Jan 31 05:34:36 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 05:34:36 crc kubenswrapper[5050]: > Jan 31 05:34:37 crc kubenswrapper[5050]: I0131 05:34:37.543127 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" event={"ID":"4803e93f-9a9e-43eb-8d0f-671abc22f91a","Type":"ContainerStarted","Data":"5bfbd1e20a67a82f2f9f23c5423a6fd7599d315cf68edc11c45555ce142bf768"} Jan 31 05:34:37 crc kubenswrapper[5050]: I0131 05:34:37.569672 5050 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-jwqtz" podStartSLOduration=2.347071442 podStartE2EDuration="4.569643017s" podCreationTimestamp="2026-01-31 05:34:33 +0000 UTC" firstStartedPulling="2026-01-31 05:34:34.191627318 +0000 UTC m=+799.240788914" lastFinishedPulling="2026-01-31 05:34:36.414198893 +0000 UTC m=+801.463360489" observedRunningTime="2026-01-31 05:34:37.567398819 +0000 UTC m=+802.616560425" watchObservedRunningTime="2026-01-31 05:34:37.569643017 +0000 UTC m=+802.618804663" Jan 31 05:34:39 crc kubenswrapper[5050]: I0131 05:34:39.018270 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:34:39 crc kubenswrapper[5050]: I0131 05:34:39.018691 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:34:39 crc kubenswrapper[5050]: I0131 05:34:39.018755 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:34:39 crc kubenswrapper[5050]: I0131 05:34:39.019602 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fda1476157f97a2d389aaeaa03f696c709d711388e30f77ab369ecc733af733"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:34:39 crc kubenswrapper[5050]: 
I0131 05:34:39.019703 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://8fda1476157f97a2d389aaeaa03f696c709d711388e30f77ab369ecc733af733" gracePeriod=600 Jan 31 05:34:40 crc kubenswrapper[5050]: I0131 05:34:40.563528 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="8fda1476157f97a2d389aaeaa03f696c709d711388e30f77ab369ecc733af733" exitCode=0 Jan 31 05:34:40 crc kubenswrapper[5050]: I0131 05:34:40.563539 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"8fda1476157f97a2d389aaeaa03f696c709d711388e30f77ab369ecc733af733"} Jan 31 05:34:40 crc kubenswrapper[5050]: I0131 05:34:40.563598 5050 scope.go:117] "RemoveContainer" containerID="15581d05502b7653eca57913422553640138d30ed6c6d91517e5fec43402b57c" Jan 31 05:34:41 crc kubenswrapper[5050]: I0131 05:34:41.572002 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"28ca310875e65cf5e9290eaf5b0d71245b16dc8b0b1ac33324bea4c715946d1f"} Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.553402 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vkqdm"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.554563 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.557002 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-skfg4" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.576313 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.577024 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.578977 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.597213 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.607104 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9bxhq"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.616232 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.643620 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t7n\" (UniqueName: \"kubernetes.io/projected/fb76c612-cfce-47d5-adaa-d7b10661b9ca-kube-api-access-k7t7n\") pod \"nmstate-metrics-54757c584b-vkqdm\" (UID: \"fb76c612-cfce-47d5-adaa-d7b10661b9ca\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.655980 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vkqdm"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.726754 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.727859 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.730324 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.730753 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.731156 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-8zl76" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.733925 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.745268 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-dbus-socket\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.745344 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhszq\" (UniqueName: \"kubernetes.io/projected/048078df-7c17-42ac-96bd-ddcbe64854d3-kube-api-access-xhszq\") pod \"nmstate-webhook-8474b5b9d8-fxx9l\" (UID: \"048078df-7c17-42ac-96bd-ddcbe64854d3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.745395 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4kb\" (UniqueName: \"kubernetes.io/projected/68a91a5f-abd1-4f99-8417-e208ef75a82e-kube-api-access-8m4kb\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.745416 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-nmstate-lock\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.745445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7t7n\" (UniqueName: \"kubernetes.io/projected/fb76c612-cfce-47d5-adaa-d7b10661b9ca-kube-api-access-k7t7n\") pod \"nmstate-metrics-54757c584b-vkqdm\" (UID: \"fb76c612-cfce-47d5-adaa-d7b10661b9ca\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.745470 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/048078df-7c17-42ac-96bd-ddcbe64854d3-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fxx9l\" (UID: \"048078df-7c17-42ac-96bd-ddcbe64854d3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.745490 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-ovs-socket\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.774053 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7t7n\" (UniqueName: \"kubernetes.io/projected/fb76c612-cfce-47d5-adaa-d7b10661b9ca-kube-api-access-k7t7n\") pod \"nmstate-metrics-54757c584b-vkqdm\" (UID: \"fb76c612-cfce-47d5-adaa-d7b10661b9ca\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846723 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m4kb\" (UniqueName: \"kubernetes.io/projected/68a91a5f-abd1-4f99-8417-e208ef75a82e-kube-api-access-8m4kb\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846772 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-nmstate-lock\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846793 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/048078df-7c17-42ac-96bd-ddcbe64854d3-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fxx9l\" (UID: \"048078df-7c17-42ac-96bd-ddcbe64854d3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846808 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-ovs-socket\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846839 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdlbq\" (UniqueName: \"kubernetes.io/projected/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-kube-api-access-zdlbq\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846872 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-dbus-socket\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846917 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 
05:34:43.846941 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.846978 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhszq\" (UniqueName: \"kubernetes.io/projected/048078df-7c17-42ac-96bd-ddcbe64854d3-kube-api-access-xhszq\") pod \"nmstate-webhook-8474b5b9d8-fxx9l\" (UID: \"048078df-7c17-42ac-96bd-ddcbe64854d3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.847540 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-dbus-socket\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.847553 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-ovs-socket\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.847580 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/68a91a5f-abd1-4f99-8417-e208ef75a82e-nmstate-lock\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.852280 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/048078df-7c17-42ac-96bd-ddcbe64854d3-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fxx9l\" (UID: \"048078df-7c17-42ac-96bd-ddcbe64854d3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.873751 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhszq\" (UniqueName: \"kubernetes.io/projected/048078df-7c17-42ac-96bd-ddcbe64854d3-kube-api-access-xhszq\") pod \"nmstate-webhook-8474b5b9d8-fxx9l\" (UID: \"048078df-7c17-42ac-96bd-ddcbe64854d3\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.876657 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4kb\" (UniqueName: \"kubernetes.io/projected/68a91a5f-abd1-4f99-8417-e208ef75a82e-kube-api-access-8m4kb\") pod \"nmstate-handler-9bxhq\" (UID: \"68a91a5f-abd1-4f99-8417-e208ef75a82e\") " pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.879237 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.893663 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.919640 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-755f69c744-74wsp"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.920443 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.932146 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-755f69c744-74wsp"] Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.944378 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.949378 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdlbq\" (UniqueName: \"kubernetes.io/projected/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-kube-api-access-zdlbq\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.949474 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.949503 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: E0131 05:34:43.949628 5050 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 31 05:34:43 crc kubenswrapper[5050]: E0131 05:34:43.949707 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-plugin-serving-cert podName:d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7 nodeName:}" failed. No retries permitted until 2026-01-31 05:34:44.449688506 +0000 UTC m=+809.498850112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-t99rk" (UID: "d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7") : secret "plugin-serving-cert" not found Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.950527 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:43 crc kubenswrapper[5050]: I0131 05:34:43.981778 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdlbq\" (UniqueName: \"kubernetes.io/projected/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-kube-api-access-zdlbq\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.050680 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-oauth-config\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.051200 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-config\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.051242 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-oauth-serving-cert\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.051291 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-service-ca\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.051313 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-trusted-ca-bundle\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.051338 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-serving-cert\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.051361 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnzf\" (UniqueName: \"kubernetes.io/projected/0855fb8c-377c-4b89-b158-3e4a3b300e9c-kube-api-access-xgnzf\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.153199 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-oauth-config\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.153264 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-config\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.153310 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-oauth-serving-cert\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.153356 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-service-ca\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.153375 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-trusted-ca-bundle\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.153395 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-serving-cert\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.153426 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgnzf\" (UniqueName: \"kubernetes.io/projected/0855fb8c-377c-4b89-b158-3e4a3b300e9c-kube-api-access-xgnzf\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.154611 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-oauth-serving-cert\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.154615 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-service-ca\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.154664 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-config\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.154970 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0855fb8c-377c-4b89-b158-3e4a3b300e9c-trusted-ca-bundle\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.157238 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vkqdm"] Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.160968 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-oauth-config\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.161181 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0855fb8c-377c-4b89-b158-3e4a3b300e9c-console-serving-cert\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.171591 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgnzf\" (UniqueName: \"kubernetes.io/projected/0855fb8c-377c-4b89-b158-3e4a3b300e9c-kube-api-access-xgnzf\") pod \"console-755f69c744-74wsp\" (UID: \"0855fb8c-377c-4b89-b158-3e4a3b300e9c\") " pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 
crc kubenswrapper[5050]: I0131 05:34:44.200235 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l"] Jan 31 05:34:44 crc kubenswrapper[5050]: W0131 05:34:44.200703 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod048078df_7c17_42ac_96bd_ddcbe64854d3.slice/crio-48595675c41c330df3e80087adedc9c4a3fb27f7243b692743e6f0a024db43f1 WatchSource:0}: Error finding container 48595675c41c330df3e80087adedc9c4a3fb27f7243b692743e6f0a024db43f1: Status 404 returned error can't find the container with id 48595675c41c330df3e80087adedc9c4a3fb27f7243b692743e6f0a024db43f1 Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.258515 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.459711 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.462516 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-755f69c744-74wsp"] Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.465927 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t99rk\" (UID: \"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:44 crc kubenswrapper[5050]: W0131 05:34:44.468189 5050 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0855fb8c_377c_4b89_b158_3e4a3b300e9c.slice/crio-aa8429f1bdf6fc8314459ce83e6aa4711a40922af4c1ae4cdfbc85f583047be8 WatchSource:0}: Error finding container aa8429f1bdf6fc8314459ce83e6aa4711a40922af4c1ae4cdfbc85f583047be8: Status 404 returned error can't find the container with id aa8429f1bdf6fc8314459ce83e6aa4711a40922af4c1ae4cdfbc85f583047be8 Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.598596 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9bxhq" event={"ID":"68a91a5f-abd1-4f99-8417-e208ef75a82e","Type":"ContainerStarted","Data":"8f361c4a62a74f92376c0f5d8ce6cbc4cda368cba3a5be0f1b4a4a7d3db0dc71"} Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.599763 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-755f69c744-74wsp" event={"ID":"0855fb8c-377c-4b89-b158-3e4a3b300e9c","Type":"ContainerStarted","Data":"aa8429f1bdf6fc8314459ce83e6aa4711a40922af4c1ae4cdfbc85f583047be8"} Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.601348 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" event={"ID":"fb76c612-cfce-47d5-adaa-d7b10661b9ca","Type":"ContainerStarted","Data":"f120e9cf8207da8c21e8aa5afb5e1f005b0878e2bf833393360a5e525d79f660"} Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.602394 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" event={"ID":"048078df-7c17-42ac-96bd-ddcbe64854d3","Type":"ContainerStarted","Data":"48595675c41c330df3e80087adedc9c4a3fb27f7243b692743e6f0a024db43f1"} Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.641899 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" Jan 31 05:34:44 crc kubenswrapper[5050]: I0131 05:34:44.900993 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk"] Jan 31 05:34:45 crc kubenswrapper[5050]: I0131 05:34:45.609508 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-755f69c744-74wsp" event={"ID":"0855fb8c-377c-4b89-b158-3e4a3b300e9c","Type":"ContainerStarted","Data":"8164a8b6a62a1ebeaa80b2f569d00fe14b3d9992552b5d2ce51dc69f06818e19"} Jan 31 05:34:45 crc kubenswrapper[5050]: I0131 05:34:45.613068 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" event={"ID":"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7","Type":"ContainerStarted","Data":"932d18249afad167f1433614e3144b11f3ea7e1c23e219a5129ab7ab022324ee"} Jan 31 05:34:45 crc kubenswrapper[5050]: I0131 05:34:45.631024 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-755f69c744-74wsp" podStartSLOduration=2.63100786 podStartE2EDuration="2.63100786s" podCreationTimestamp="2026-01-31 05:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:34:45.629292855 +0000 UTC m=+810.678454451" watchObservedRunningTime="2026-01-31 05:34:45.63100786 +0000 UTC m=+810.680169456" Jan 31 05:34:45 crc kubenswrapper[5050]: I0131 05:34:45.902817 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:45 crc kubenswrapper[5050]: I0131 05:34:45.957284 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:46 crc kubenswrapper[5050]: I0131 05:34:46.135042 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-7jx94"] Jan 31 05:34:47 crc kubenswrapper[5050]: I0131 05:34:47.634659 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" event={"ID":"fb76c612-cfce-47d5-adaa-d7b10661b9ca","Type":"ContainerStarted","Data":"f55a6ceb0a4bdcdd837d443810a562818156785a06e4a0b2d4d4619a15c89c38"} Jan 31 05:34:47 crc kubenswrapper[5050]: I0131 05:34:47.640172 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" event={"ID":"048078df-7c17-42ac-96bd-ddcbe64854d3","Type":"ContainerStarted","Data":"d105604747bce403870f8aa51f69cdfea01a73ba9beac666e529e7b65c22114d"} Jan 31 05:34:47 crc kubenswrapper[5050]: I0131 05:34:47.644348 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7jx94" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="registry-server" containerID="cri-o://60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53" gracePeriod=2 Jan 31 05:34:47 crc kubenswrapper[5050]: I0131 05:34:47.644760 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9bxhq" event={"ID":"68a91a5f-abd1-4f99-8417-e208ef75a82e","Type":"ContainerStarted","Data":"dc11ba52a5c0ebaf429d22a7fefd405f30277e67c71b23d00c8cb025f4432044"} Jan 31 05:34:47 crc kubenswrapper[5050]: I0131 05:34:47.644859 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:47 crc kubenswrapper[5050]: I0131 05:34:47.672397 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" podStartSLOduration=2.198113887 podStartE2EDuration="4.672364307s" podCreationTimestamp="2026-01-31 05:34:43 +0000 UTC" firstStartedPulling="2026-01-31 05:34:44.202653064 +0000 UTC m=+809.251814660" lastFinishedPulling="2026-01-31 
05:34:46.676903474 +0000 UTC m=+811.726065080" observedRunningTime="2026-01-31 05:34:47.661737718 +0000 UTC m=+812.710899354" watchObservedRunningTime="2026-01-31 05:34:47.672364307 +0000 UTC m=+812.721525913" Jan 31 05:34:47 crc kubenswrapper[5050]: I0131 05:34:47.716590 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9bxhq" podStartSLOduration=2.032196889 podStartE2EDuration="4.716555214s" podCreationTimestamp="2026-01-31 05:34:43 +0000 UTC" firstStartedPulling="2026-01-31 05:34:44.002739586 +0000 UTC m=+809.051901182" lastFinishedPulling="2026-01-31 05:34:46.687097911 +0000 UTC m=+811.736259507" observedRunningTime="2026-01-31 05:34:47.698496221 +0000 UTC m=+812.747657887" watchObservedRunningTime="2026-01-31 05:34:47.716555214 +0000 UTC m=+812.765716820" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.013350 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.108812 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-catalog-content\") pod \"163b82b4-21b0-4c02-b09e-7985bc08fa11\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.108912 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgslt\" (UniqueName: \"kubernetes.io/projected/163b82b4-21b0-4c02-b09e-7985bc08fa11-kube-api-access-jgslt\") pod \"163b82b4-21b0-4c02-b09e-7985bc08fa11\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.109069 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-utilities\") pod \"163b82b4-21b0-4c02-b09e-7985bc08fa11\" (UID: \"163b82b4-21b0-4c02-b09e-7985bc08fa11\") " Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.110308 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-utilities" (OuterVolumeSpecName: "utilities") pod "163b82b4-21b0-4c02-b09e-7985bc08fa11" (UID: "163b82b4-21b0-4c02-b09e-7985bc08fa11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.132044 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163b82b4-21b0-4c02-b09e-7985bc08fa11-kube-api-access-jgslt" (OuterVolumeSpecName: "kube-api-access-jgslt") pod "163b82b4-21b0-4c02-b09e-7985bc08fa11" (UID: "163b82b4-21b0-4c02-b09e-7985bc08fa11"). InnerVolumeSpecName "kube-api-access-jgslt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.210839 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.211307 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgslt\" (UniqueName: \"kubernetes.io/projected/163b82b4-21b0-4c02-b09e-7985bc08fa11-kube-api-access-jgslt\") on node \"crc\" DevicePath \"\"" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.251017 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "163b82b4-21b0-4c02-b09e-7985bc08fa11" (UID: "163b82b4-21b0-4c02-b09e-7985bc08fa11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.312309 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163b82b4-21b0-4c02-b09e-7985bc08fa11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.659400 5050 generic.go:334] "Generic (PLEG): container finished" podID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerID="60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53" exitCode=0 Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.659464 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jx94" event={"ID":"163b82b4-21b0-4c02-b09e-7985bc08fa11","Type":"ContainerDied","Data":"60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53"} Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.659494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jx94" event={"ID":"163b82b4-21b0-4c02-b09e-7985bc08fa11","Type":"ContainerDied","Data":"2dd52b6c5f9d0aaf4189e64b856282359bc7dcf5157ff01d1346380af30b5a9c"} Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.659511 5050 scope.go:117] "RemoveContainer" containerID="60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.661313 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7jx94" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.663063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" event={"ID":"d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7","Type":"ContainerStarted","Data":"baea14bf37f8a3b4b4fc0012861ba1e5401dcc7259011ff0a8a968a9ee36b2d2"} Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.663227 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.680360 5050 scope.go:117] "RemoveContainer" containerID="515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.688358 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t99rk" podStartSLOduration=2.983749703 podStartE2EDuration="5.688335467s" podCreationTimestamp="2026-01-31 05:34:43 +0000 UTC" firstStartedPulling="2026-01-31 05:34:44.912391281 +0000 UTC m=+809.961552877" lastFinishedPulling="2026-01-31 05:34:47.616977045 +0000 UTC m=+812.666138641" observedRunningTime="2026-01-31 05:34:48.686194281 +0000 UTC m=+813.735355887" watchObservedRunningTime="2026-01-31 05:34:48.688335467 +0000 UTC m=+813.737497063" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.724328 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jx94"] Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.728451 5050 scope.go:117] "RemoveContainer" containerID="332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.741139 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7jx94"] Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 
05:34:48.754068 5050 scope.go:117] "RemoveContainer" containerID="60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53" Jan 31 05:34:48 crc kubenswrapper[5050]: E0131 05:34:48.754659 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53\": container with ID starting with 60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53 not found: ID does not exist" containerID="60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.754734 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53"} err="failed to get container status \"60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53\": rpc error: code = NotFound desc = could not find container \"60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53\": container with ID starting with 60ff34c382d4f8f980c59c6e6c23218996482f0d6be360a5c43f67cd5a1f8a53 not found: ID does not exist" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.754787 5050 scope.go:117] "RemoveContainer" containerID="515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66" Jan 31 05:34:48 crc kubenswrapper[5050]: E0131 05:34:48.755302 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66\": container with ID starting with 515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66 not found: ID does not exist" containerID="515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.755345 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66"} err="failed to get container status \"515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66\": rpc error: code = NotFound desc = could not find container \"515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66\": container with ID starting with 515703a64cb1a1e50ca3e5534f2fa7dadb5eee5efa9441b0e7c54187f7d2cc66 not found: ID does not exist" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.755373 5050 scope.go:117] "RemoveContainer" containerID="332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e" Jan 31 05:34:48 crc kubenswrapper[5050]: E0131 05:34:48.755926 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e\": container with ID starting with 332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e not found: ID does not exist" containerID="332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e" Jan 31 05:34:48 crc kubenswrapper[5050]: I0131 05:34:48.755977 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e"} err="failed to get container status \"332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e\": rpc error: code = NotFound desc = could not find container \"332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e\": container with ID starting with 332f361093f40649641e6ccaa9cf5e251110f9fedcfb3bafa25c68b9c436750e not found: ID does not exist" Jan 31 05:34:49 crc kubenswrapper[5050]: I0131 05:34:49.759065 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" path="/var/lib/kubelet/pods/163b82b4-21b0-4c02-b09e-7985bc08fa11/volumes" Jan 31 05:34:50 crc kubenswrapper[5050]: I0131 
05:34:50.681703 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" event={"ID":"fb76c612-cfce-47d5-adaa-d7b10661b9ca","Type":"ContainerStarted","Data":"7b71fd4cb073dc1b4471b53391e7eb57d582f75e580818f3577e11ad6a4de21d"} Jan 31 05:34:50 crc kubenswrapper[5050]: I0131 05:34:50.715213 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-vkqdm" podStartSLOduration=1.376926831 podStartE2EDuration="7.715184285s" podCreationTimestamp="2026-01-31 05:34:43 +0000 UTC" firstStartedPulling="2026-01-31 05:34:44.163670583 +0000 UTC m=+809.212832179" lastFinishedPulling="2026-01-31 05:34:50.501928037 +0000 UTC m=+815.551089633" observedRunningTime="2026-01-31 05:34:50.705136271 +0000 UTC m=+815.754297897" watchObservedRunningTime="2026-01-31 05:34:50.715184285 +0000 UTC m=+815.764345931" Jan 31 05:34:53 crc kubenswrapper[5050]: I0131 05:34:53.978666 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9bxhq" Jan 31 05:34:54 crc kubenswrapper[5050]: I0131 05:34:54.259611 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:54 crc kubenswrapper[5050]: I0131 05:34:54.259827 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:54 crc kubenswrapper[5050]: I0131 05:34:54.267776 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:54 crc kubenswrapper[5050]: I0131 05:34:54.721065 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-755f69c744-74wsp" Jan 31 05:34:54 crc kubenswrapper[5050]: I0131 05:34:54.775913 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-fk4vq"] Jan 31 05:35:03 crc kubenswrapper[5050]: I0131 05:35:03.905117 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fxx9l" Jan 31 05:35:19 crc kubenswrapper[5050]: I0131 05:35:19.836216 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fk4vq" podUID="dab2d02c-8e81-40c5-a5ca-98be1833702e" containerName="console" containerID="cri-o://ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811" gracePeriod=15 Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.066079 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl"] Jan 31 05:35:20 crc kubenswrapper[5050]: E0131 05:35:20.066674 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="registry-server" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.066696 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="registry-server" Jan 31 05:35:20 crc kubenswrapper[5050]: E0131 05:35:20.066729 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="extract-utilities" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.066742 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="extract-utilities" Jan 31 05:35:20 crc kubenswrapper[5050]: E0131 05:35:20.066763 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="extract-content" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.066775 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" 
containerName="extract-content" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.066994 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="163b82b4-21b0-4c02-b09e-7985bc08fa11" containerName="registry-server" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.068190 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.070304 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.081541 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl"] Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.222020 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.222071 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.222089 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlpg4\" 
(UniqueName: \"kubernetes.io/projected/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-kube-api-access-mlpg4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.309482 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fk4vq_dab2d02c-8e81-40c5-a5ca-98be1833702e/console/0.log" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.309575 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.322805 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.322850 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.322871 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlpg4\" (UniqueName: \"kubernetes.io/projected/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-kube-api-access-mlpg4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: 
\"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.323662 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.323727 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.346940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlpg4\" (UniqueName: \"kubernetes.io/projected/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-kube-api-access-mlpg4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.425232 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-oauth-config\") pod \"dab2d02c-8e81-40c5-a5ca-98be1833702e\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.425354 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-serving-cert\") pod \"dab2d02c-8e81-40c5-a5ca-98be1833702e\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.425419 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkt76\" (UniqueName: \"kubernetes.io/projected/dab2d02c-8e81-40c5-a5ca-98be1833702e-kube-api-access-kkt76\") pod \"dab2d02c-8e81-40c5-a5ca-98be1833702e\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.425475 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-config\") pod \"dab2d02c-8e81-40c5-a5ca-98be1833702e\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.425517 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-oauth-serving-cert\") pod \"dab2d02c-8e81-40c5-a5ca-98be1833702e\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.425551 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-trusted-ca-bundle\") pod \"dab2d02c-8e81-40c5-a5ca-98be1833702e\" (UID: \"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.425586 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-service-ca\") pod \"dab2d02c-8e81-40c5-a5ca-98be1833702e\" (UID: 
\"dab2d02c-8e81-40c5-a5ca-98be1833702e\") " Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.426561 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dab2d02c-8e81-40c5-a5ca-98be1833702e" (UID: "dab2d02c-8e81-40c5-a5ca-98be1833702e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.426593 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-config" (OuterVolumeSpecName: "console-config") pod "dab2d02c-8e81-40c5-a5ca-98be1833702e" (UID: "dab2d02c-8e81-40c5-a5ca-98be1833702e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.427304 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dab2d02c-8e81-40c5-a5ca-98be1833702e" (UID: "dab2d02c-8e81-40c5-a5ca-98be1833702e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.428573 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-service-ca" (OuterVolumeSpecName: "service-ca") pod "dab2d02c-8e81-40c5-a5ca-98be1833702e" (UID: "dab2d02c-8e81-40c5-a5ca-98be1833702e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.432263 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dab2d02c-8e81-40c5-a5ca-98be1833702e" (UID: "dab2d02c-8e81-40c5-a5ca-98be1833702e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.432345 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab2d02c-8e81-40c5-a5ca-98be1833702e-kube-api-access-kkt76" (OuterVolumeSpecName: "kube-api-access-kkt76") pod "dab2d02c-8e81-40c5-a5ca-98be1833702e" (UID: "dab2d02c-8e81-40c5-a5ca-98be1833702e"). InnerVolumeSpecName "kube-api-access-kkt76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.432770 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.434468 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dab2d02c-8e81-40c5-a5ca-98be1833702e" (UID: "dab2d02c-8e81-40c5-a5ca-98be1833702e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.527910 5050 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.528492 5050 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.528523 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkt76\" (UniqueName: \"kubernetes.io/projected/dab2d02c-8e81-40c5-a5ca-98be1833702e-kube-api-access-kkt76\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.528625 5050 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.528647 5050 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.528664 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.528681 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dab2d02c-8e81-40c5-a5ca-98be1833702e-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:20 crc 
kubenswrapper[5050]: I0131 05:35:20.626139 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl"] Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.895092 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fk4vq_dab2d02c-8e81-40c5-a5ca-98be1833702e/console/0.log" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.895131 5050 generic.go:334] "Generic (PLEG): container finished" podID="dab2d02c-8e81-40c5-a5ca-98be1833702e" containerID="ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811" exitCode=2 Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.895174 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fk4vq" event={"ID":"dab2d02c-8e81-40c5-a5ca-98be1833702e","Type":"ContainerDied","Data":"ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811"} Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.895202 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fk4vq" event={"ID":"dab2d02c-8e81-40c5-a5ca-98be1833702e","Type":"ContainerDied","Data":"28341ece364e875241f029a1fbb844c33c8b5db200de72a07f402ee1b4e93879"} Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.895218 5050 scope.go:117] "RemoveContainer" containerID="ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.895310 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fk4vq" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.900022 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" event={"ID":"3e3a55b6-6044-420d-8d5a-2dd94a073cbd","Type":"ContainerStarted","Data":"b0a3890ff68a3e4a2ed8d368b0df18a1ec95e59dc6a707e2ef3bee214d6da094"} Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.900085 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" event={"ID":"3e3a55b6-6044-420d-8d5a-2dd94a073cbd","Type":"ContainerStarted","Data":"ee3c9aafe015821363eff33ef98bfce1b4b3dcf6229b216aea740dcb4d258f71"} Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.918702 5050 scope.go:117] "RemoveContainer" containerID="ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811" Jan 31 05:35:20 crc kubenswrapper[5050]: E0131 05:35:20.921856 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811\": container with ID starting with ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811 not found: ID does not exist" containerID="ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.921917 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811"} err="failed to get container status \"ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811\": rpc error: code = NotFound desc = could not find container \"ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811\": container with ID starting with 
ee16784cdcfc88a790558ea056688f2b19cda2d98dbadd73311c34edfc622811 not found: ID does not exist" Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.925645 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fk4vq"] Jan 31 05:35:20 crc kubenswrapper[5050]: I0131 05:35:20.930548 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fk4vq"] Jan 31 05:35:21 crc kubenswrapper[5050]: I0131 05:35:21.755725 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab2d02c-8e81-40c5-a5ca-98be1833702e" path="/var/lib/kubelet/pods/dab2d02c-8e81-40c5-a5ca-98be1833702e/volumes" Jan 31 05:35:21 crc kubenswrapper[5050]: I0131 05:35:21.911491 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerID="b0a3890ff68a3e4a2ed8d368b0df18a1ec95e59dc6a707e2ef3bee214d6da094" exitCode=0 Jan 31 05:35:21 crc kubenswrapper[5050]: I0131 05:35:21.911626 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" event={"ID":"3e3a55b6-6044-420d-8d5a-2dd94a073cbd","Type":"ContainerDied","Data":"b0a3890ff68a3e4a2ed8d368b0df18a1ec95e59dc6a707e2ef3bee214d6da094"} Jan 31 05:35:23 crc kubenswrapper[5050]: I0131 05:35:23.935191 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerID="924d59de553d553002c496ee8ce0c38fdfb55bccca9e9f5a70a776cff0ee0c47" exitCode=0 Jan 31 05:35:23 crc kubenswrapper[5050]: I0131 05:35:23.935279 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" event={"ID":"3e3a55b6-6044-420d-8d5a-2dd94a073cbd","Type":"ContainerDied","Data":"924d59de553d553002c496ee8ce0c38fdfb55bccca9e9f5a70a776cff0ee0c47"} Jan 31 05:35:24 crc kubenswrapper[5050]: I0131 05:35:24.948925 5050 generic.go:334] 
"Generic (PLEG): container finished" podID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerID="45df2b54442d9eae886285b193eebd6a028b87425c2cedfcbb5d2999d80cf81f" exitCode=0 Jan 31 05:35:24 crc kubenswrapper[5050]: I0131 05:35:24.949068 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" event={"ID":"3e3a55b6-6044-420d-8d5a-2dd94a073cbd","Type":"ContainerDied","Data":"45df2b54442d9eae886285b193eebd6a028b87425c2cedfcbb5d2999d80cf81f"} Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.301187 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.419389 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-bundle\") pod \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.419801 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlpg4\" (UniqueName: \"kubernetes.io/projected/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-kube-api-access-mlpg4\") pod \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.420174 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-util\") pod \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\" (UID: \"3e3a55b6-6044-420d-8d5a-2dd94a073cbd\") " Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.426753 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-bundle" (OuterVolumeSpecName: "bundle") pod "3e3a55b6-6044-420d-8d5a-2dd94a073cbd" (UID: "3e3a55b6-6044-420d-8d5a-2dd94a073cbd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.430072 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-kube-api-access-mlpg4" (OuterVolumeSpecName: "kube-api-access-mlpg4") pod "3e3a55b6-6044-420d-8d5a-2dd94a073cbd" (UID: "3e3a55b6-6044-420d-8d5a-2dd94a073cbd"). InnerVolumeSpecName "kube-api-access-mlpg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.522549 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.522657 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlpg4\" (UniqueName: \"kubernetes.io/projected/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-kube-api-access-mlpg4\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.704886 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-util" (OuterVolumeSpecName: "util") pod "3e3a55b6-6044-420d-8d5a-2dd94a073cbd" (UID: "3e3a55b6-6044-420d-8d5a-2dd94a073cbd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.725472 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e3a55b6-6044-420d-8d5a-2dd94a073cbd-util\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.968627 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" event={"ID":"3e3a55b6-6044-420d-8d5a-2dd94a073cbd","Type":"ContainerDied","Data":"ee3c9aafe015821363eff33ef98bfce1b4b3dcf6229b216aea740dcb4d258f71"} Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.968687 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee3c9aafe015821363eff33ef98bfce1b4b3dcf6229b216aea740dcb4d258f71" Jan 31 05:35:26 crc kubenswrapper[5050]: I0131 05:35:26.968777 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.216580 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8"] Jan 31 05:35:35 crc kubenswrapper[5050]: E0131 05:35:35.217394 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerName="util" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.217410 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerName="util" Jan 31 05:35:35 crc kubenswrapper[5050]: E0131 05:35:35.217426 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab2d02c-8e81-40c5-a5ca-98be1833702e" containerName="console" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.217435 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dab2d02c-8e81-40c5-a5ca-98be1833702e" containerName="console" Jan 31 05:35:35 crc kubenswrapper[5050]: E0131 05:35:35.217445 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerName="extract" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.217453 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerName="extract" Jan 31 05:35:35 crc kubenswrapper[5050]: E0131 05:35:35.217467 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerName="pull" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.217474 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerName="pull" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.217594 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab2d02c-8e81-40c5-a5ca-98be1833702e" containerName="console" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.217615 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3a55b6-6044-420d-8d5a-2dd94a073cbd" containerName="extract" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.218114 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.221894 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.222220 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.222380 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5r5k5" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.222407 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.223277 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.249097 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8"] Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.386919 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qssnw\" (UniqueName: \"kubernetes.io/projected/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-kube-api-access-qssnw\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.387123 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-webhook-cert\") pod 
\"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.387180 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-apiservice-cert\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.484625 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m"] Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.485338 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.487589 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a311a66-2a17-4fa9-8da1-1910cca8d327-apiservice-cert\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.487644 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-webhook-cert\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.487668 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a311a66-2a17-4fa9-8da1-1910cca8d327-webhook-cert\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.487705 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-apiservice-cert\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.487728 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qssnw\" (UniqueName: \"kubernetes.io/projected/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-kube-api-access-qssnw\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.487836 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rkm\" (UniqueName: \"kubernetes.io/projected/6a311a66-2a17-4fa9-8da1-1910cca8d327-kube-api-access-h5rkm\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.490165 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.494295 5050 reflector.go:368] 
Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mfjbz" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.494387 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.495037 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-webhook-cert\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.495866 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-apiservice-cert\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.509044 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m"] Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.514433 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qssnw\" (UniqueName: \"kubernetes.io/projected/e0cbccec-0abb-496b-99f5-3dc3e2f884a9-kube-api-access-qssnw\") pod \"metallb-operator-controller-manager-699dfcf9bf-482s8\" (UID: \"e0cbccec-0abb-496b-99f5-3dc3e2f884a9\") " pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.535452 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.589063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5rkm\" (UniqueName: \"kubernetes.io/projected/6a311a66-2a17-4fa9-8da1-1910cca8d327-kube-api-access-h5rkm\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.589419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a311a66-2a17-4fa9-8da1-1910cca8d327-apiservice-cert\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.589448 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a311a66-2a17-4fa9-8da1-1910cca8d327-webhook-cert\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.594552 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a311a66-2a17-4fa9-8da1-1910cca8d327-webhook-cert\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.594567 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6a311a66-2a17-4fa9-8da1-1910cca8d327-apiservice-cert\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.617634 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5rkm\" (UniqueName: \"kubernetes.io/projected/6a311a66-2a17-4fa9-8da1-1910cca8d327-kube-api-access-h5rkm\") pod \"metallb-operator-webhook-server-5d49b744cb-vrv8m\" (UID: \"6a311a66-2a17-4fa9-8da1-1910cca8d327\") " pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.753321 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8"] Jan 31 05:35:35 crc kubenswrapper[5050]: W0131 05:35:35.759247 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0cbccec_0abb_496b_99f5_3dc3e2f884a9.slice/crio-9349c23a5fd99ce78371164b84cb8187bd3363ff35752766d3bad622e4f0f1a8 WatchSource:0}: Error finding container 9349c23a5fd99ce78371164b84cb8187bd3363ff35752766d3bad622e4f0f1a8: Status 404 returned error can't find the container with id 9349c23a5fd99ce78371164b84cb8187bd3363ff35752766d3bad622e4f0f1a8 Jan 31 05:35:35 crc kubenswrapper[5050]: I0131 05:35:35.880851 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:36 crc kubenswrapper[5050]: I0131 05:35:36.029391 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" event={"ID":"e0cbccec-0abb-496b-99f5-3dc3e2f884a9","Type":"ContainerStarted","Data":"9349c23a5fd99ce78371164b84cb8187bd3363ff35752766d3bad622e4f0f1a8"} Jan 31 05:35:36 crc kubenswrapper[5050]: I0131 05:35:36.418066 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m"] Jan 31 05:35:36 crc kubenswrapper[5050]: W0131 05:35:36.426625 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a311a66_2a17_4fa9_8da1_1910cca8d327.slice/crio-de2d09f1f59d02fc72c828f21aac5d5adf7458f6711b3596d8efd101c547d0f4 WatchSource:0}: Error finding container de2d09f1f59d02fc72c828f21aac5d5adf7458f6711b3596d8efd101c547d0f4: Status 404 returned error can't find the container with id de2d09f1f59d02fc72c828f21aac5d5adf7458f6711b3596d8efd101c547d0f4 Jan 31 05:35:37 crc kubenswrapper[5050]: I0131 05:35:37.040041 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" event={"ID":"6a311a66-2a17-4fa9-8da1-1910cca8d327","Type":"ContainerStarted","Data":"de2d09f1f59d02fc72c828f21aac5d5adf7458f6711b3596d8efd101c547d0f4"} Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.028793 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p46z6"] Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.030425 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.045738 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p46z6"] Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.060695 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtm7\" (UniqueName: \"kubernetes.io/projected/dcbb3d49-844f-4cab-a4d0-f63b60225f33-kube-api-access-brtm7\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.060774 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-catalog-content\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.060821 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-utilities\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.169645 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-utilities\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.169718 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-brtm7\" (UniqueName: \"kubernetes.io/projected/dcbb3d49-844f-4cab-a4d0-f63b60225f33-kube-api-access-brtm7\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.169769 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-catalog-content\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.170216 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-catalog-content\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.170545 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-utilities\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.193539 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brtm7\" (UniqueName: \"kubernetes.io/projected/dcbb3d49-844f-4cab-a4d0-f63b60225f33-kube-api-access-brtm7\") pod \"redhat-marketplace-p46z6\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:40 crc kubenswrapper[5050]: I0131 05:35:40.349944 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:41 crc kubenswrapper[5050]: I0131 05:35:41.761078 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p46z6"] Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.082010 5050 generic.go:334] "Generic (PLEG): container finished" podID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerID="ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5" exitCode=0 Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.082379 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p46z6" event={"ID":"dcbb3d49-844f-4cab-a4d0-f63b60225f33","Type":"ContainerDied","Data":"ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5"} Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.082753 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p46z6" event={"ID":"dcbb3d49-844f-4cab-a4d0-f63b60225f33","Type":"ContainerStarted","Data":"34898bcb80e761b2956de71a3d76ff8190f6890f3c72951935bddf6c84def091"} Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.086295 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" event={"ID":"6a311a66-2a17-4fa9-8da1-1910cca8d327","Type":"ContainerStarted","Data":"2dc71209dcd3c82eb2ed59e390d6694b0bf163ac8a8246c7cc56d37b7e9637da"} Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.086405 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.088742 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" 
event={"ID":"e0cbccec-0abb-496b-99f5-3dc3e2f884a9","Type":"ContainerStarted","Data":"4f518a8dba5a1d13f3e5d8ed40e7e3fbdae5533beed0e9e390519ac8c8c8ea87"} Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.088921 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.139323 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" podStartSLOduration=1.559025132 podStartE2EDuration="7.139279288s" podCreationTimestamp="2026-01-31 05:35:35 +0000 UTC" firstStartedPulling="2026-01-31 05:35:35.763195547 +0000 UTC m=+860.812357133" lastFinishedPulling="2026-01-31 05:35:41.343449693 +0000 UTC m=+866.392611289" observedRunningTime="2026-01-31 05:35:42.132710807 +0000 UTC m=+867.181872463" watchObservedRunningTime="2026-01-31 05:35:42.139279288 +0000 UTC m=+867.188440894" Jan 31 05:35:42 crc kubenswrapper[5050]: I0131 05:35:42.169274 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" podStartSLOduration=2.159218888 podStartE2EDuration="7.16921541s" podCreationTimestamp="2026-01-31 05:35:35 +0000 UTC" firstStartedPulling="2026-01-31 05:35:36.428530554 +0000 UTC m=+861.477692150" lastFinishedPulling="2026-01-31 05:35:41.438527076 +0000 UTC m=+866.487688672" observedRunningTime="2026-01-31 05:35:42.162059664 +0000 UTC m=+867.211221280" watchObservedRunningTime="2026-01-31 05:35:42.16921541 +0000 UTC m=+867.218377016" Jan 31 05:35:43 crc kubenswrapper[5050]: I0131 05:35:43.099206 5050 generic.go:334] "Generic (PLEG): container finished" podID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerID="d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c" exitCode=0 Jan 31 05:35:43 crc kubenswrapper[5050]: I0131 05:35:43.099731 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p46z6" event={"ID":"dcbb3d49-844f-4cab-a4d0-f63b60225f33","Type":"ContainerDied","Data":"d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c"} Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.216613 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-57lhk"] Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.218322 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.233804 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-utilities\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.233929 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pg7\" (UniqueName: \"kubernetes.io/projected/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-kube-api-access-d4pg7\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.234005 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-catalog-content\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.238040 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-57lhk"] Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.335368 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-utilities\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.336489 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pg7\" (UniqueName: \"kubernetes.io/projected/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-kube-api-access-d4pg7\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.336667 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-catalog-content\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.336269 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-utilities\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.337378 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-catalog-content\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " 
pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.359227 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pg7\" (UniqueName: \"kubernetes.io/projected/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-kube-api-access-d4pg7\") pod \"certified-operators-57lhk\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:44 crc kubenswrapper[5050]: I0131 05:35:44.547325 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:45 crc kubenswrapper[5050]: I0131 05:35:45.058299 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57lhk"] Jan 31 05:35:45 crc kubenswrapper[5050]: I0131 05:35:45.112976 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p46z6" event={"ID":"dcbb3d49-844f-4cab-a4d0-f63b60225f33","Type":"ContainerStarted","Data":"512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4"} Jan 31 05:35:45 crc kubenswrapper[5050]: I0131 05:35:45.116694 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57lhk" event={"ID":"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d","Type":"ContainerStarted","Data":"b66919050fb0778c905da01be9d746bcc5bd147c03751f412a7b1d2b2fe01e56"} Jan 31 05:35:45 crc kubenswrapper[5050]: I0131 05:35:45.772415 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p46z6" podStartSLOduration=3.770873533 podStartE2EDuration="5.772394956s" podCreationTimestamp="2026-01-31 05:35:40 +0000 UTC" firstStartedPulling="2026-01-31 05:35:42.084564524 +0000 UTC m=+867.133726140" lastFinishedPulling="2026-01-31 05:35:44.086085967 +0000 UTC m=+869.135247563" observedRunningTime="2026-01-31 05:35:45.143017207 +0000 UTC 
m=+870.192178803" watchObservedRunningTime="2026-01-31 05:35:45.772394956 +0000 UTC m=+870.821556552" Jan 31 05:35:46 crc kubenswrapper[5050]: I0131 05:35:46.122325 5050 generic.go:334] "Generic (PLEG): container finished" podID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerID="700488613f37a19a9c55d84f4e805ec019e7d6df3126328c9d4a8dfd5824edbe" exitCode=0 Jan 31 05:35:46 crc kubenswrapper[5050]: I0131 05:35:46.123247 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57lhk" event={"ID":"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d","Type":"ContainerDied","Data":"700488613f37a19a9c55d84f4e805ec019e7d6df3126328c9d4a8dfd5824edbe"} Jan 31 05:35:47 crc kubenswrapper[5050]: I0131 05:35:47.132173 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57lhk" event={"ID":"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d","Type":"ContainerStarted","Data":"289d6d027a481e3b6205f7f40819b225324f95a17f1ebfae26632565435bcc78"} Jan 31 05:35:48 crc kubenswrapper[5050]: I0131 05:35:48.140078 5050 generic.go:334] "Generic (PLEG): container finished" podID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerID="289d6d027a481e3b6205f7f40819b225324f95a17f1ebfae26632565435bcc78" exitCode=0 Jan 31 05:35:48 crc kubenswrapper[5050]: I0131 05:35:48.140184 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57lhk" event={"ID":"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d","Type":"ContainerDied","Data":"289d6d027a481e3b6205f7f40819b225324f95a17f1ebfae26632565435bcc78"} Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.147873 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57lhk" event={"ID":"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d","Type":"ContainerStarted","Data":"09adb9b6277db48906b5700e45d1dc4fcb51dcb6bd07fe4b7111d549de3d7295"} Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.218092 5050 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/community-operators-cctl4"] Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.219428 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.235529 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cctl4"] Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.401424 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-catalog-content\") pod \"community-operators-cctl4\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.401471 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-utilities\") pod \"community-operators-cctl4\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.401708 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mjjh\" (UniqueName: \"kubernetes.io/projected/8e943723-c608-4e24-a969-9790f18cf03a-kube-api-access-7mjjh\") pod \"community-operators-cctl4\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.503076 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-catalog-content\") pod \"community-operators-cctl4\" (UID: 
\"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.502578 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-catalog-content\") pod \"community-operators-cctl4\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.503560 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-utilities\") pod \"community-operators-cctl4\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.503621 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mjjh\" (UniqueName: \"kubernetes.io/projected/8e943723-c608-4e24-a969-9790f18cf03a-kube-api-access-7mjjh\") pod \"community-operators-cctl4\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.503890 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-utilities\") pod \"community-operators-cctl4\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.543967 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mjjh\" (UniqueName: \"kubernetes.io/projected/8e943723-c608-4e24-a969-9790f18cf03a-kube-api-access-7mjjh\") pod \"community-operators-cctl4\" (UID: 
\"8e943723-c608-4e24-a969-9790f18cf03a\") " pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:49 crc kubenswrapper[5050]: I0131 05:35:49.835441 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:50 crc kubenswrapper[5050]: I0131 05:35:50.138174 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cctl4"] Jan 31 05:35:50 crc kubenswrapper[5050]: W0131 05:35:50.146261 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e943723_c608_4e24_a969_9790f18cf03a.slice/crio-74cdfa05da80f6f91a0d7cfd4050a37284557b62c04fe5d6a240375501686831 WatchSource:0}: Error finding container 74cdfa05da80f6f91a0d7cfd4050a37284557b62c04fe5d6a240375501686831: Status 404 returned error can't find the container with id 74cdfa05da80f6f91a0d7cfd4050a37284557b62c04fe5d6a240375501686831 Jan 31 05:35:50 crc kubenswrapper[5050]: I0131 05:35:50.152476 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cctl4" event={"ID":"8e943723-c608-4e24-a969-9790f18cf03a","Type":"ContainerStarted","Data":"74cdfa05da80f6f91a0d7cfd4050a37284557b62c04fe5d6a240375501686831"} Jan 31 05:35:50 crc kubenswrapper[5050]: I0131 05:35:50.350897 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:50 crc kubenswrapper[5050]: I0131 05:35:50.350938 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:50 crc kubenswrapper[5050]: I0131 05:35:50.392077 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:50 crc kubenswrapper[5050]: I0131 05:35:50.407533 5050 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-57lhk" podStartSLOduration=3.806472066 podStartE2EDuration="6.407512236s" podCreationTimestamp="2026-01-31 05:35:44 +0000 UTC" firstStartedPulling="2026-01-31 05:35:46.12397725 +0000 UTC m=+871.173138846" lastFinishedPulling="2026-01-31 05:35:48.72501742 +0000 UTC m=+873.774179016" observedRunningTime="2026-01-31 05:35:50.181474753 +0000 UTC m=+875.230636359" watchObservedRunningTime="2026-01-31 05:35:50.407512236 +0000 UTC m=+875.456673832" Jan 31 05:35:51 crc kubenswrapper[5050]: I0131 05:35:51.161089 5050 generic.go:334] "Generic (PLEG): container finished" podID="8e943723-c608-4e24-a969-9790f18cf03a" containerID="14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d" exitCode=0 Jan 31 05:35:51 crc kubenswrapper[5050]: I0131 05:35:51.161162 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cctl4" event={"ID":"8e943723-c608-4e24-a969-9790f18cf03a","Type":"ContainerDied","Data":"14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d"} Jan 31 05:35:51 crc kubenswrapper[5050]: I0131 05:35:51.229682 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:52 crc kubenswrapper[5050]: I0131 05:35:52.610129 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p46z6"] Jan 31 05:35:53 crc kubenswrapper[5050]: I0131 05:35:53.172768 5050 generic.go:334] "Generic (PLEG): container finished" podID="8e943723-c608-4e24-a969-9790f18cf03a" containerID="9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62" exitCode=0 Jan 31 05:35:53 crc kubenswrapper[5050]: I0131 05:35:53.172839 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cctl4" 
event={"ID":"8e943723-c608-4e24-a969-9790f18cf03a","Type":"ContainerDied","Data":"9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62"} Jan 31 05:35:53 crc kubenswrapper[5050]: I0131 05:35:53.173464 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p46z6" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerName="registry-server" containerID="cri-o://512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4" gracePeriod=2 Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.171742 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.182978 5050 generic.go:334] "Generic (PLEG): container finished" podID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerID="512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4" exitCode=0 Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.183021 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p46z6" event={"ID":"dcbb3d49-844f-4cab-a4d0-f63b60225f33","Type":"ContainerDied","Data":"512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4"} Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.183046 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p46z6" event={"ID":"dcbb3d49-844f-4cab-a4d0-f63b60225f33","Type":"ContainerDied","Data":"34898bcb80e761b2956de71a3d76ff8190f6890f3c72951935bddf6c84def091"} Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.183061 5050 scope.go:117] "RemoveContainer" containerID="512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.183169 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p46z6" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.223918 5050 scope.go:117] "RemoveContainer" containerID="d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.242279 5050 scope.go:117] "RemoveContainer" containerID="ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.259109 5050 scope.go:117] "RemoveContainer" containerID="512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4" Jan 31 05:35:54 crc kubenswrapper[5050]: E0131 05:35:54.261161 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4\": container with ID starting with 512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4 not found: ID does not exist" containerID="512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.261275 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4"} err="failed to get container status \"512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4\": rpc error: code = NotFound desc = could not find container \"512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4\": container with ID starting with 512b0bcc0a19de0110e012d8497df24f9dd1abdedd5a50de8e3a571e8bcc33e4 not found: ID does not exist" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.261313 5050 scope.go:117] "RemoveContainer" containerID="d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c" Jan 31 05:35:54 crc kubenswrapper[5050]: E0131 05:35:54.261895 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c\": container with ID starting with d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c not found: ID does not exist" containerID="d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.261934 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c"} err="failed to get container status \"d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c\": rpc error: code = NotFound desc = could not find container \"d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c\": container with ID starting with d7d4e2e9766e6baf74d5861c37dbf9dcf90ca7e966f057b3d69baa1ee610da8c not found: ID does not exist" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.261999 5050 scope.go:117] "RemoveContainer" containerID="ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.262519 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-catalog-content\") pod \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.262636 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-utilities\") pod \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.262694 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brtm7\" (UniqueName: 
\"kubernetes.io/projected/dcbb3d49-844f-4cab-a4d0-f63b60225f33-kube-api-access-brtm7\") pod \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\" (UID: \"dcbb3d49-844f-4cab-a4d0-f63b60225f33\") " Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.263775 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-utilities" (OuterVolumeSpecName: "utilities") pod "dcbb3d49-844f-4cab-a4d0-f63b60225f33" (UID: "dcbb3d49-844f-4cab-a4d0-f63b60225f33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:54 crc kubenswrapper[5050]: E0131 05:35:54.269613 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5\": container with ID starting with ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5 not found: ID does not exist" containerID="ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.269652 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5"} err="failed to get container status \"ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5\": rpc error: code = NotFound desc = could not find container \"ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5\": container with ID starting with ebedbd0070ebd83b093439e2c08989cc8ad80d421893d3043e0777d89c1b64f5 not found: ID does not exist" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.271893 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbb3d49-844f-4cab-a4d0-f63b60225f33-kube-api-access-brtm7" (OuterVolumeSpecName: "kube-api-access-brtm7") pod "dcbb3d49-844f-4cab-a4d0-f63b60225f33" (UID: 
"dcbb3d49-844f-4cab-a4d0-f63b60225f33"). InnerVolumeSpecName "kube-api-access-brtm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.290481 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcbb3d49-844f-4cab-a4d0-f63b60225f33" (UID: "dcbb3d49-844f-4cab-a4d0-f63b60225f33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.364283 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.364517 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brtm7\" (UniqueName: \"kubernetes.io/projected/dcbb3d49-844f-4cab-a4d0-f63b60225f33-kube-api-access-brtm7\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.364594 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbb3d49-844f-4cab-a4d0-f63b60225f33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.510679 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p46z6"] Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.514347 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p46z6"] Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.548179 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.548236 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:54 crc kubenswrapper[5050]: I0131 05:35:54.607415 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:55 crc kubenswrapper[5050]: I0131 05:35:55.192998 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cctl4" event={"ID":"8e943723-c608-4e24-a969-9790f18cf03a","Type":"ContainerStarted","Data":"8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e"} Jan 31 05:35:55 crc kubenswrapper[5050]: I0131 05:35:55.263664 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:35:55 crc kubenswrapper[5050]: I0131 05:35:55.283305 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cctl4" podStartSLOduration=3.305524563 podStartE2EDuration="6.283285279s" podCreationTimestamp="2026-01-31 05:35:49 +0000 UTC" firstStartedPulling="2026-01-31 05:35:51.162484237 +0000 UTC m=+876.211645853" lastFinishedPulling="2026-01-31 05:35:54.140244973 +0000 UTC m=+879.189406569" observedRunningTime="2026-01-31 05:35:55.225797809 +0000 UTC m=+880.274959445" watchObservedRunningTime="2026-01-31 05:35:55.283285279 +0000 UTC m=+880.332446895" Jan 31 05:35:55 crc kubenswrapper[5050]: I0131 05:35:55.743935 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" path="/var/lib/kubelet/pods/dcbb3d49-844f-4cab-a4d0-f63b60225f33/volumes" Jan 31 05:35:55 crc kubenswrapper[5050]: I0131 05:35:55.902560 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5d49b744cb-vrv8m" Jan 31 05:35:56 crc kubenswrapper[5050]: I0131 05:35:56.208354 5050 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57lhk"] Jan 31 05:35:57 crc kubenswrapper[5050]: I0131 05:35:57.205440 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-57lhk" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerName="registry-server" containerID="cri-o://09adb9b6277db48906b5700e45d1dc4fcb51dcb6bd07fe4b7111d549de3d7295" gracePeriod=2 Jan 31 05:35:59 crc kubenswrapper[5050]: I0131 05:35:59.219805 5050 generic.go:334] "Generic (PLEG): container finished" podID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerID="09adb9b6277db48906b5700e45d1dc4fcb51dcb6bd07fe4b7111d549de3d7295" exitCode=0 Jan 31 05:35:59 crc kubenswrapper[5050]: I0131 05:35:59.219911 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57lhk" event={"ID":"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d","Type":"ContainerDied","Data":"09adb9b6277db48906b5700e45d1dc4fcb51dcb6bd07fe4b7111d549de3d7295"} Jan 31 05:35:59 crc kubenswrapper[5050]: I0131 05:35:59.836177 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:59 crc kubenswrapper[5050]: I0131 05:35:59.836242 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:35:59 crc kubenswrapper[5050]: I0131 05:35:59.904336 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.277226 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.706340 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.859052 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4pg7\" (UniqueName: \"kubernetes.io/projected/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-kube-api-access-d4pg7\") pod \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.859236 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-utilities\") pod \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.859263 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-catalog-content\") pod \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\" (UID: \"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d\") " Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.860314 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-utilities" (OuterVolumeSpecName: "utilities") pod "c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" (UID: "c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.871155 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-kube-api-access-d4pg7" (OuterVolumeSpecName: "kube-api-access-d4pg7") pod "c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" (UID: "c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d"). InnerVolumeSpecName "kube-api-access-d4pg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.906740 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" (UID: "c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.960333 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.960370 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:00 crc kubenswrapper[5050]: I0131 05:36:00.960385 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4pg7\" (UniqueName: \"kubernetes.io/projected/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d-kube-api-access-d4pg7\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.239573 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57lhk" event={"ID":"c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d","Type":"ContainerDied","Data":"b66919050fb0778c905da01be9d746bcc5bd147c03751f412a7b1d2b2fe01e56"} Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.239640 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-57lhk" Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.239642 5050 scope.go:117] "RemoveContainer" containerID="09adb9b6277db48906b5700e45d1dc4fcb51dcb6bd07fe4b7111d549de3d7295" Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.270629 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57lhk"] Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.276217 5050 scope.go:117] "RemoveContainer" containerID="289d6d027a481e3b6205f7f40819b225324f95a17f1ebfae26632565435bcc78" Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.276342 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-57lhk"] Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.312442 5050 scope.go:117] "RemoveContainer" containerID="700488613f37a19a9c55d84f4e805ec019e7d6df3126328c9d4a8dfd5824edbe" Jan 31 05:36:01 crc kubenswrapper[5050]: I0131 05:36:01.747852 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" path="/var/lib/kubelet/pods/c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d/volumes" Jan 31 05:36:03 crc kubenswrapper[5050]: I0131 05:36:03.413697 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cctl4"] Jan 31 05:36:03 crc kubenswrapper[5050]: I0131 05:36:03.414160 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cctl4" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="registry-server" containerID="cri-o://8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e" gracePeriod=2 Jan 31 05:36:03 crc kubenswrapper[5050]: I0131 05:36:03.833090 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.000527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-utilities\") pod \"8e943723-c608-4e24-a969-9790f18cf03a\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.000592 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-catalog-content\") pod \"8e943723-c608-4e24-a969-9790f18cf03a\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.000932 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mjjh\" (UniqueName: \"kubernetes.io/projected/8e943723-c608-4e24-a969-9790f18cf03a-kube-api-access-7mjjh\") pod \"8e943723-c608-4e24-a969-9790f18cf03a\" (UID: \"8e943723-c608-4e24-a969-9790f18cf03a\") " Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.002328 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-utilities" (OuterVolumeSpecName: "utilities") pod "8e943723-c608-4e24-a969-9790f18cf03a" (UID: "8e943723-c608-4e24-a969-9790f18cf03a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.009789 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e943723-c608-4e24-a969-9790f18cf03a-kube-api-access-7mjjh" (OuterVolumeSpecName: "kube-api-access-7mjjh") pod "8e943723-c608-4e24-a969-9790f18cf03a" (UID: "8e943723-c608-4e24-a969-9790f18cf03a"). InnerVolumeSpecName "kube-api-access-7mjjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.102473 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.102543 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mjjh\" (UniqueName: \"kubernetes.io/projected/8e943723-c608-4e24-a969-9790f18cf03a-kube-api-access-7mjjh\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.264413 5050 generic.go:334] "Generic (PLEG): container finished" podID="8e943723-c608-4e24-a969-9790f18cf03a" containerID="8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e" exitCode=0 Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.264475 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cctl4" event={"ID":"8e943723-c608-4e24-a969-9790f18cf03a","Type":"ContainerDied","Data":"8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e"} Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.264495 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cctl4" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.264524 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cctl4" event={"ID":"8e943723-c608-4e24-a969-9790f18cf03a","Type":"ContainerDied","Data":"74cdfa05da80f6f91a0d7cfd4050a37284557b62c04fe5d6a240375501686831"} Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.264554 5050 scope.go:117] "RemoveContainer" containerID="8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.275857 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e943723-c608-4e24-a969-9790f18cf03a" (UID: "8e943723-c608-4e24-a969-9790f18cf03a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.289001 5050 scope.go:117] "RemoveContainer" containerID="9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.304926 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e943723-c608-4e24-a969-9790f18cf03a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.314223 5050 scope.go:117] "RemoveContainer" containerID="14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.353990 5050 scope.go:117] "RemoveContainer" containerID="8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e" Jan 31 05:36:04 crc kubenswrapper[5050]: E0131 05:36:04.354696 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e\": container with ID starting with 8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e not found: ID does not exist" containerID="8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.354753 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e"} err="failed to get container status \"8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e\": rpc error: code = NotFound desc = could not find container \"8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e\": container with ID starting with 8e400fab8f69db348ec2dae92b84ed92727a9afe15f557abd55edc7f913ca62e not found: ID does not exist" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.354795 5050 scope.go:117] "RemoveContainer" containerID="9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62" Jan 31 05:36:04 crc kubenswrapper[5050]: E0131 05:36:04.355413 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62\": container with ID starting with 9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62 not found: ID does not exist" containerID="9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.355457 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62"} err="failed to get container status \"9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62\": rpc error: code = NotFound desc = could not find container 
\"9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62\": container with ID starting with 9e337aa811d9fbe6381aa60ca79d2c45c23f870996542ae968960a0c75e5db62 not found: ID does not exist" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.355496 5050 scope.go:117] "RemoveContainer" containerID="14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d" Jan 31 05:36:04 crc kubenswrapper[5050]: E0131 05:36:04.356004 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d\": container with ID starting with 14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d not found: ID does not exist" containerID="14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.356050 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d"} err="failed to get container status \"14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d\": rpc error: code = NotFound desc = could not find container \"14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d\": container with ID starting with 14994a5ce00d155176b886a8ad18217d3cb64653f6d7a1f2a3ebca409b6a008d not found: ID does not exist" Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.629584 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cctl4"] Jan 31 05:36:04 crc kubenswrapper[5050]: I0131 05:36:04.647947 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cctl4"] Jan 31 05:36:05 crc kubenswrapper[5050]: I0131 05:36:05.745271 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e943723-c608-4e24-a969-9790f18cf03a" 
path="/var/lib/kubelet/pods/8e943723-c608-4e24-a969-9790f18cf03a/volumes" Jan 31 05:36:15 crc kubenswrapper[5050]: I0131 05:36:15.539302 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-699dfcf9bf-482s8" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329379 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sh9db"] Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329600 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329612 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329625 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329631 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329641 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="extract-utilities" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329647 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="extract-utilities" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329655 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerName="extract-utilities" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329660 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" 
containerName="extract-utilities" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329668 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerName="extract-utilities" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329673 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerName="extract-utilities" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329681 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerName="extract-content" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329686 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerName="extract-content" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329694 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329699 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329709 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="extract-content" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329714 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="extract-content" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.329720 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerName="extract-content" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329725 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" 
containerName="extract-content" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329827 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e68412-9dc0-4cad-b8f0-4c5cbf66fe6d" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329840 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e943723-c608-4e24-a969-9790f18cf03a" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.329849 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbb3d49-844f-4cab-a4d0-f63b60225f33" containerName="registry-server" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.331667 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.333579 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.334385 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lrvn2" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.334716 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.344651 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj"] Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.345559 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.347613 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.360652 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj"] Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.430790 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wc77p"] Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.431589 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.433796 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.434076 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.434085 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6mc4x" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.444943 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.445549 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-wns9v"] Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.446364 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.447576 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.475730 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wns9v"] Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.495807 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-reloader\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.495870 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8b6cef5-1a93-4009-9d0e-e6007edca005-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-t8bqj\" (UID: \"c8b6cef5-1a93-4009-9d0e-e6007edca005\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.495897 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56dr9\" (UniqueName: \"kubernetes.io/projected/2a506e19-84f6-4f6e-a6e0-656c7a529151-kube-api-access-56dr9\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.495925 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-sockets\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc 
kubenswrapper[5050]: I0131 05:36:16.495960 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-metrics\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.496041 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96g85\" (UniqueName: \"kubernetes.io/projected/c8b6cef5-1a93-4009-9d0e-e6007edca005-kube-api-access-96g85\") pod \"frr-k8s-webhook-server-7df86c4f6c-t8bqj\" (UID: \"c8b6cef5-1a93-4009-9d0e-e6007edca005\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.496120 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-startup\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.496168 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a506e19-84f6-4f6e-a6e0-656c7a529151-metrics-certs\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.496208 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-conf\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597132 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-reloader\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597194 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8b6cef5-1a93-4009-9d0e-e6007edca005-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-t8bqj\" (UID: \"c8b6cef5-1a93-4009-9d0e-e6007edca005\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597223 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56dr9\" (UniqueName: \"kubernetes.io/projected/2a506e19-84f6-4f6e-a6e0-656c7a529151-kube-api-access-56dr9\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597254 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metallb-excludel2\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597296 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-sockets\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597313 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-metrics\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597340 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-metrics-certs\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597362 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm84f\" (UniqueName: \"kubernetes.io/projected/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-kube-api-access-qm84f\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597388 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96g85\" (UniqueName: \"kubernetes.io/projected/c8b6cef5-1a93-4009-9d0e-e6007edca005-kube-api-access-96g85\") pod \"frr-k8s-webhook-server-7df86c4f6c-t8bqj\" (UID: \"c8b6cef5-1a93-4009-9d0e-e6007edca005\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597416 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkm9\" (UniqueName: 
\"kubernetes.io/projected/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-kube-api-access-rkkm9\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597444 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-cert\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597471 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-startup\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597507 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a506e19-84f6-4f6e-a6e0-656c7a529151-metrics-certs\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597531 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metrics-certs\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.597561 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-conf\") pod \"frr-k8s-sh9db\" (UID: 
\"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.598004 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-conf\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.598185 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-reloader\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.598253 5050 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.598292 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8b6cef5-1a93-4009-9d0e-e6007edca005-cert podName:c8b6cef5-1a93-4009-9d0e-e6007edca005 nodeName:}" failed. No retries permitted until 2026-01-31 05:36:17.098277367 +0000 UTC m=+902.147438953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8b6cef5-1a93-4009-9d0e-e6007edca005-cert") pod "frr-k8s-webhook-server-7df86c4f6c-t8bqj" (UID: "c8b6cef5-1a93-4009-9d0e-e6007edca005") : secret "frr-k8s-webhook-server-cert" not found Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.598790 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-sockets\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.598988 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2a506e19-84f6-4f6e-a6e0-656c7a529151-metrics\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.599939 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2a506e19-84f6-4f6e-a6e0-656c7a529151-frr-startup\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.609342 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a506e19-84f6-4f6e-a6e0-656c7a529151-metrics-certs\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.614018 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96g85\" (UniqueName: \"kubernetes.io/projected/c8b6cef5-1a93-4009-9d0e-e6007edca005-kube-api-access-96g85\") pod \"frr-k8s-webhook-server-7df86c4f6c-t8bqj\" (UID: 
\"c8b6cef5-1a93-4009-9d0e-e6007edca005\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.620428 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56dr9\" (UniqueName: \"kubernetes.io/projected/2a506e19-84f6-4f6e-a6e0-656c7a529151-kube-api-access-56dr9\") pod \"frr-k8s-sh9db\" (UID: \"2a506e19-84f6-4f6e-a6e0-656c7a529151\") " pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.648497 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.698836 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metrics-certs\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.698916 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.698937 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metallb-excludel2\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.698979 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-metrics-certs\") pod 
\"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.699023 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm84f\" (UniqueName: \"kubernetes.io/projected/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-kube-api-access-qm84f\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.699045 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkm9\" (UniqueName: \"kubernetes.io/projected/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-kube-api-access-rkkm9\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.699066 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-cert\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.699164 5050 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.699248 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist podName:3162d0c0-398a-4e7f-9ff9-9bfc3ed25615 nodeName:}" failed. No retries permitted until 2026-01-31 05:36:17.199226542 +0000 UTC m=+902.248388218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist") pod "speaker-wc77p" (UID: "3162d0c0-398a-4e7f-9ff9-9bfc3ed25615") : secret "metallb-memberlist" not found Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.699164 5050 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.699297 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-metrics-certs podName:f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2 nodeName:}" failed. No retries permitted until 2026-01-31 05:36:17.199285114 +0000 UTC m=+902.248446780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-metrics-certs") pod "controller-6968d8fdc4-wns9v" (UID: "f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2") : secret "controller-certs-secret" not found Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.699747 5050 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 31 05:36:16 crc kubenswrapper[5050]: E0131 05:36:16.699918 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metrics-certs podName:3162d0c0-398a-4e7f-9ff9-9bfc3ed25615 nodeName:}" failed. No retries permitted until 2026-01-31 05:36:17.19989431 +0000 UTC m=+902.249055926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metrics-certs") pod "speaker-wc77p" (UID: "3162d0c0-398a-4e7f-9ff9-9bfc3ed25615") : secret "speaker-certs-secret" not found Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.700029 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metallb-excludel2\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.702414 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.716112 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-cert\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.717887 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm84f\" (UniqueName: \"kubernetes.io/projected/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-kube-api-access-qm84f\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:16 crc kubenswrapper[5050]: I0131 05:36:16.724076 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkm9\" (UniqueName: \"kubernetes.io/projected/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-kube-api-access-rkkm9\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.106359 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8b6cef5-1a93-4009-9d0e-e6007edca005-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-t8bqj\" (UID: \"c8b6cef5-1a93-4009-9d0e-e6007edca005\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.112270 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8b6cef5-1a93-4009-9d0e-e6007edca005-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-t8bqj\" (UID: \"c8b6cef5-1a93-4009-9d0e-e6007edca005\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.207999 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.208092 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-metrics-certs\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.208204 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metrics-certs\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:17 crc kubenswrapper[5050]: E0131 05:36:17.208543 5050 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 
05:36:17 crc kubenswrapper[5050]: E0131 05:36:17.208851 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist podName:3162d0c0-398a-4e7f-9ff9-9bfc3ed25615 nodeName:}" failed. No retries permitted until 2026-01-31 05:36:18.208820279 +0000 UTC m=+903.257981915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist") pod "speaker-wc77p" (UID: "3162d0c0-398a-4e7f-9ff9-9bfc3ed25615") : secret "metallb-memberlist" not found Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.213901 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-metrics-certs\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.214919 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2-metrics-certs\") pod \"controller-6968d8fdc4-wns9v\" (UID: \"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2\") " pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.266377 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.358707 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.390680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerStarted","Data":"cda2aba5468293ad63fd67b0939881c16631734aaf1a6d8371b63fb98f51067a"} Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.578655 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj"] Jan 31 05:36:17 crc kubenswrapper[5050]: I0131 05:36:17.666457 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wns9v"] Jan 31 05:36:18 crc kubenswrapper[5050]: I0131 05:36:18.222970 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:18 crc kubenswrapper[5050]: I0131 05:36:18.244230 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3162d0c0-398a-4e7f-9ff9-9bfc3ed25615-memberlist\") pod \"speaker-wc77p\" (UID: \"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615\") " pod="metallb-system/speaker-wc77p" Jan 31 05:36:18 crc kubenswrapper[5050]: I0131 05:36:18.397205 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wns9v" event={"ID":"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2","Type":"ContainerStarted","Data":"2d21a5e127d3c314f301743b82aaaa200c374aa4ee7b4e0811a3faa541acb151"} Jan 31 05:36:18 crc kubenswrapper[5050]: I0131 05:36:18.397263 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wns9v" 
event={"ID":"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2","Type":"ContainerStarted","Data":"d11bc60411d5d30c743963e72da70de94134d0c88fd18d09f3918023593c7bc1"} Jan 31 05:36:18 crc kubenswrapper[5050]: I0131 05:36:18.397276 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wns9v" event={"ID":"f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2","Type":"ContainerStarted","Data":"22ba8f5bae619aeb9f9684cc5c1008fd0b37e75bbd2371a1ddb7202e1b5359cb"} Jan 31 05:36:18 crc kubenswrapper[5050]: I0131 05:36:18.398246 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" event={"ID":"c8b6cef5-1a93-4009-9d0e-e6007edca005","Type":"ContainerStarted","Data":"822daeae6dbd745fdbb1a74ec9c406d857ee4791ee06e774debd5938c4199d1b"} Jan 31 05:36:18 crc kubenswrapper[5050]: I0131 05:36:18.543214 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wc77p" Jan 31 05:36:18 crc kubenswrapper[5050]: W0131 05:36:18.581893 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3162d0c0_398a_4e7f_9ff9_9bfc3ed25615.slice/crio-72d40620d06c0412557b76fbf542caf52e5343841ec7eafc04e532f2a381951e WatchSource:0}: Error finding container 72d40620d06c0412557b76fbf542caf52e5343841ec7eafc04e532f2a381951e: Status 404 returned error can't find the container with id 72d40620d06c0412557b76fbf542caf52e5343841ec7eafc04e532f2a381951e Jan 31 05:36:19 crc kubenswrapper[5050]: I0131 05:36:19.407509 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc77p" event={"ID":"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615","Type":"ContainerStarted","Data":"32cc08fe0e4436c9f38617409e67a5ea20212dbf7a74fb157d8165f2f86c761b"} Jan 31 05:36:19 crc kubenswrapper[5050]: I0131 05:36:19.407857 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc77p" 
event={"ID":"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615","Type":"ContainerStarted","Data":"2049ed3a5605874c1087a1b7471227b9788a489dc8f0112ffbc4e73d6b47ad8f"} Jan 31 05:36:19 crc kubenswrapper[5050]: I0131 05:36:19.407868 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc77p" event={"ID":"3162d0c0-398a-4e7f-9ff9-9bfc3ed25615","Type":"ContainerStarted","Data":"72d40620d06c0412557b76fbf542caf52e5343841ec7eafc04e532f2a381951e"} Jan 31 05:36:19 crc kubenswrapper[5050]: I0131 05:36:19.407885 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:19 crc kubenswrapper[5050]: I0131 05:36:19.408120 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wc77p" Jan 31 05:36:19 crc kubenswrapper[5050]: I0131 05:36:19.429502 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-wns9v" podStartSLOduration=3.429485225 podStartE2EDuration="3.429485225s" podCreationTimestamp="2026-01-31 05:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:36:19.426718319 +0000 UTC m=+904.475879935" watchObservedRunningTime="2026-01-31 05:36:19.429485225 +0000 UTC m=+904.478646811" Jan 31 05:36:19 crc kubenswrapper[5050]: I0131 05:36:19.449159 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wc77p" podStartSLOduration=3.449142215 podStartE2EDuration="3.449142215s" podCreationTimestamp="2026-01-31 05:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:36:19.446150183 +0000 UTC m=+904.495311779" watchObservedRunningTime="2026-01-31 05:36:19.449142215 +0000 UTC m=+904.498303811" Jan 31 05:36:26 crc kubenswrapper[5050]: I0131 05:36:26.462262 
5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" event={"ID":"c8b6cef5-1a93-4009-9d0e-e6007edca005","Type":"ContainerStarted","Data":"0b09b98d1a448a646837922709df823b5be40749b89f6132a43d78f9232f675e"} Jan 31 05:36:26 crc kubenswrapper[5050]: I0131 05:36:26.462722 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:26 crc kubenswrapper[5050]: I0131 05:36:26.466297 5050 generic.go:334] "Generic (PLEG): container finished" podID="2a506e19-84f6-4f6e-a6e0-656c7a529151" containerID="1109c8d4d69356a359ee557b987463dbcd4ed220a50dd199759e9807c2771fe2" exitCode=0 Jan 31 05:36:26 crc kubenswrapper[5050]: I0131 05:36:26.466359 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerDied","Data":"1109c8d4d69356a359ee557b987463dbcd4ed220a50dd199759e9807c2771fe2"} Jan 31 05:36:26 crc kubenswrapper[5050]: I0131 05:36:26.487832 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" podStartSLOduration=2.645637959 podStartE2EDuration="10.487812606s" podCreationTimestamp="2026-01-31 05:36:16 +0000 UTC" firstStartedPulling="2026-01-31 05:36:17.592176446 +0000 UTC m=+902.641338052" lastFinishedPulling="2026-01-31 05:36:25.434351043 +0000 UTC m=+910.483512699" observedRunningTime="2026-01-31 05:36:26.482547241 +0000 UTC m=+911.531708837" watchObservedRunningTime="2026-01-31 05:36:26.487812606 +0000 UTC m=+911.536974202" Jan 31 05:36:27 crc kubenswrapper[5050]: I0131 05:36:27.362594 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-wns9v" Jan 31 05:36:27 crc kubenswrapper[5050]: I0131 05:36:27.477280 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="2a506e19-84f6-4f6e-a6e0-656c7a529151" containerID="dfe39d2ad57363cd7b0db0f46462dfd05a43c2af31ba6470e5808dabe357cb09" exitCode=0 Jan 31 05:36:27 crc kubenswrapper[5050]: I0131 05:36:27.478874 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerDied","Data":"dfe39d2ad57363cd7b0db0f46462dfd05a43c2af31ba6470e5808dabe357cb09"} Jan 31 05:36:28 crc kubenswrapper[5050]: I0131 05:36:28.489832 5050 generic.go:334] "Generic (PLEG): container finished" podID="2a506e19-84f6-4f6e-a6e0-656c7a529151" containerID="85817b94e84b5d5c403a824c26406c68dd83afcf335c7f7c34d7915fd8c7613f" exitCode=0 Jan 31 05:36:28 crc kubenswrapper[5050]: I0131 05:36:28.489928 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerDied","Data":"85817b94e84b5d5c403a824c26406c68dd83afcf335c7f7c34d7915fd8c7613f"} Jan 31 05:36:28 crc kubenswrapper[5050]: I0131 05:36:28.556136 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wc77p" Jan 31 05:36:29 crc kubenswrapper[5050]: I0131 05:36:29.513612 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerStarted","Data":"d3ad4a5785c5bd32b5a975e5aa7521ea3794b0429a4e5fc08b141328f31abd58"} Jan 31 05:36:29 crc kubenswrapper[5050]: I0131 05:36:29.513927 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerStarted","Data":"80537d0610c098b303be50ac8e72b3278be674d70530f78afb72f09a17730db2"} Jan 31 05:36:29 crc kubenswrapper[5050]: I0131 05:36:29.513941 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" 
event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerStarted","Data":"a484368fd06b5290445fde3496ad112bceacce86e18ad93b9ffd95a0677920a7"} Jan 31 05:36:29 crc kubenswrapper[5050]: I0131 05:36:29.513983 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerStarted","Data":"267d2fa5f7c09511d0a0a3abbb1874f18b5c2087191f45e88e5ce75b720f5080"} Jan 31 05:36:29 crc kubenswrapper[5050]: I0131 05:36:29.513998 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerStarted","Data":"5aae1b056f82ebd3d7ad04be13af3b26085fa500baa953eba9079f88bfcf7b1a"} Jan 31 05:36:30 crc kubenswrapper[5050]: I0131 05:36:30.526218 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sh9db" event={"ID":"2a506e19-84f6-4f6e-a6e0-656c7a529151","Type":"ContainerStarted","Data":"a786dc7a0de5c5a5e0fec5d4bf7d57f94a57fc61c6c9044bdedee25e76593400"} Jan 31 05:36:30 crc kubenswrapper[5050]: I0131 05:36:30.526579 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:30 crc kubenswrapper[5050]: I0131 05:36:30.551818 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sh9db" podStartSLOduration=5.938388432 podStartE2EDuration="14.551797687s" podCreationTimestamp="2026-01-31 05:36:16 +0000 UTC" firstStartedPulling="2026-01-31 05:36:16.851492347 +0000 UTC m=+901.900653953" lastFinishedPulling="2026-01-31 05:36:25.464901592 +0000 UTC m=+910.514063208" observedRunningTime="2026-01-31 05:36:30.548458306 +0000 UTC m=+915.597619962" watchObservedRunningTime="2026-01-31 05:36:30.551797687 +0000 UTC m=+915.600959283" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.305115 5050 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-q48p4"] Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.306174 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q48p4" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.308177 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tbk6l" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.308357 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.309228 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.335122 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q48p4"] Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.425123 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v82q4\" (UniqueName: \"kubernetes.io/projected/d24e114e-6a5e-4766-996f-8f60d1c6b766-kube-api-access-v82q4\") pod \"openstack-operator-index-q48p4\" (UID: \"d24e114e-6a5e-4766-996f-8f60d1c6b766\") " pod="openstack-operators/openstack-operator-index-q48p4" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.526854 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v82q4\" (UniqueName: \"kubernetes.io/projected/d24e114e-6a5e-4766-996f-8f60d1c6b766-kube-api-access-v82q4\") pod \"openstack-operator-index-q48p4\" (UID: \"d24e114e-6a5e-4766-996f-8f60d1c6b766\") " pod="openstack-operators/openstack-operator-index-q48p4" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.560908 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v82q4\" 
(UniqueName: \"kubernetes.io/projected/d24e114e-6a5e-4766-996f-8f60d1c6b766-kube-api-access-v82q4\") pod \"openstack-operator-index-q48p4\" (UID: \"d24e114e-6a5e-4766-996f-8f60d1c6b766\") " pod="openstack-operators/openstack-operator-index-q48p4" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.626547 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q48p4" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.649217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:31 crc kubenswrapper[5050]: I0131 05:36:31.698241 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:32 crc kubenswrapper[5050]: I0131 05:36:32.070683 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q48p4"] Jan 31 05:36:32 crc kubenswrapper[5050]: I0131 05:36:32.538528 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q48p4" event={"ID":"d24e114e-6a5e-4766-996f-8f60d1c6b766","Type":"ContainerStarted","Data":"d0330c15009dbc488f88c297daeeef93061a6b2ac4b11896a4f107e9efa6bf86"} Jan 31 05:36:34 crc kubenswrapper[5050]: I0131 05:36:34.676392 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-q48p4"] Jan 31 05:36:35 crc kubenswrapper[5050]: I0131 05:36:35.295758 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kvlgc"] Jan 31 05:36:35 crc kubenswrapper[5050]: I0131 05:36:35.296627 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:35 crc kubenswrapper[5050]: I0131 05:36:35.319169 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kvlgc"] Jan 31 05:36:35 crc kubenswrapper[5050]: I0131 05:36:35.376223 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cbj\" (UniqueName: \"kubernetes.io/projected/30aa3656-81d7-47e5-8671-db7d2b566aca-kube-api-access-d2cbj\") pod \"openstack-operator-index-kvlgc\" (UID: \"30aa3656-81d7-47e5-8671-db7d2b566aca\") " pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:35 crc kubenswrapper[5050]: I0131 05:36:35.477555 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2cbj\" (UniqueName: \"kubernetes.io/projected/30aa3656-81d7-47e5-8671-db7d2b566aca-kube-api-access-d2cbj\") pod \"openstack-operator-index-kvlgc\" (UID: \"30aa3656-81d7-47e5-8671-db7d2b566aca\") " pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:35 crc kubenswrapper[5050]: I0131 05:36:35.510816 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2cbj\" (UniqueName: \"kubernetes.io/projected/30aa3656-81d7-47e5-8671-db7d2b566aca-kube-api-access-d2cbj\") pod \"openstack-operator-index-kvlgc\" (UID: \"30aa3656-81d7-47e5-8671-db7d2b566aca\") " pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:35 crc kubenswrapper[5050]: I0131 05:36:35.633644 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:37 crc kubenswrapper[5050]: I0131 05:36:37.276154 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-t8bqj" Jan 31 05:36:38 crc kubenswrapper[5050]: I0131 05:36:38.049114 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kvlgc"] Jan 31 05:36:38 crc kubenswrapper[5050]: W0131 05:36:38.481920 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30aa3656_81d7_47e5_8671_db7d2b566aca.slice/crio-9b49509447f689de411b95353415a5733f1a9c43131ede5af07ed5a6a3a5423c WatchSource:0}: Error finding container 9b49509447f689de411b95353415a5733f1a9c43131ede5af07ed5a6a3a5423c: Status 404 returned error can't find the container with id 9b49509447f689de411b95353415a5733f1a9c43131ede5af07ed5a6a3a5423c Jan 31 05:36:38 crc kubenswrapper[5050]: I0131 05:36:38.580532 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kvlgc" event={"ID":"30aa3656-81d7-47e5-8671-db7d2b566aca","Type":"ContainerStarted","Data":"9b49509447f689de411b95353415a5733f1a9c43131ede5af07ed5a6a3a5423c"} Jan 31 05:36:39 crc kubenswrapper[5050]: I0131 05:36:39.591387 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q48p4" event={"ID":"d24e114e-6a5e-4766-996f-8f60d1c6b766","Type":"ContainerStarted","Data":"b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8"} Jan 31 05:36:39 crc kubenswrapper[5050]: I0131 05:36:39.591561 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-q48p4" podUID="d24e114e-6a5e-4766-996f-8f60d1c6b766" containerName="registry-server" 
containerID="cri-o://b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8" gracePeriod=2 Jan 31 05:36:39 crc kubenswrapper[5050]: I0131 05:36:39.613173 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-q48p4" podStartSLOduration=1.7227099670000001 podStartE2EDuration="8.613143044s" podCreationTimestamp="2026-01-31 05:36:31 +0000 UTC" firstStartedPulling="2026-01-31 05:36:32.079102896 +0000 UTC m=+917.128264492" lastFinishedPulling="2026-01-31 05:36:38.969535973 +0000 UTC m=+924.018697569" observedRunningTime="2026-01-31 05:36:39.611809897 +0000 UTC m=+924.660971523" watchObservedRunningTime="2026-01-31 05:36:39.613143044 +0000 UTC m=+924.662304680" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.071608 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q48p4" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.166258 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v82q4\" (UniqueName: \"kubernetes.io/projected/d24e114e-6a5e-4766-996f-8f60d1c6b766-kube-api-access-v82q4\") pod \"d24e114e-6a5e-4766-996f-8f60d1c6b766\" (UID: \"d24e114e-6a5e-4766-996f-8f60d1c6b766\") " Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.189111 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24e114e-6a5e-4766-996f-8f60d1c6b766-kube-api-access-v82q4" (OuterVolumeSpecName: "kube-api-access-v82q4") pod "d24e114e-6a5e-4766-996f-8f60d1c6b766" (UID: "d24e114e-6a5e-4766-996f-8f60d1c6b766"). InnerVolumeSpecName "kube-api-access-v82q4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.267881 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v82q4\" (UniqueName: \"kubernetes.io/projected/d24e114e-6a5e-4766-996f-8f60d1c6b766-kube-api-access-v82q4\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.598763 5050 generic.go:334] "Generic (PLEG): container finished" podID="d24e114e-6a5e-4766-996f-8f60d1c6b766" containerID="b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8" exitCode=0 Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.598852 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q48p4" event={"ID":"d24e114e-6a5e-4766-996f-8f60d1c6b766","Type":"ContainerDied","Data":"b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8"} Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.600058 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q48p4" event={"ID":"d24e114e-6a5e-4766-996f-8f60d1c6b766","Type":"ContainerDied","Data":"d0330c15009dbc488f88c297daeeef93061a6b2ac4b11896a4f107e9efa6bf86"} Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.598877 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-q48p4" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.600096 5050 scope.go:117] "RemoveContainer" containerID="b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.601585 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kvlgc" event={"ID":"30aa3656-81d7-47e5-8671-db7d2b566aca","Type":"ContainerStarted","Data":"6db7a0a4fa373e753abb13edc39cfb401091584b1ee0e43d8ef9ed78020a4804"} Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.622030 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kvlgc" podStartSLOduration=4.441302431 podStartE2EDuration="5.622011574s" podCreationTimestamp="2026-01-31 05:36:35 +0000 UTC" firstStartedPulling="2026-01-31 05:36:38.484569244 +0000 UTC m=+923.533730850" lastFinishedPulling="2026-01-31 05:36:39.665278397 +0000 UTC m=+924.714439993" observedRunningTime="2026-01-31 05:36:40.619866874 +0000 UTC m=+925.669028470" watchObservedRunningTime="2026-01-31 05:36:40.622011574 +0000 UTC m=+925.671173170" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.625729 5050 scope.go:117] "RemoveContainer" containerID="b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8" Jan 31 05:36:40 crc kubenswrapper[5050]: E0131 05:36:40.627295 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8\": container with ID starting with b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8 not found: ID does not exist" containerID="b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.627345 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8"} err="failed to get container status \"b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8\": rpc error: code = NotFound desc = could not find container \"b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8\": container with ID starting with b0f84be2ca71be3e1d338c1513fe01c8856d3f152efc61f83c96932cbbfc91d8 not found: ID does not exist" Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.652195 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-q48p4"] Jan 31 05:36:40 crc kubenswrapper[5050]: I0131 05:36:40.661450 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-q48p4"] Jan 31 05:36:41 crc kubenswrapper[5050]: I0131 05:36:41.745905 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24e114e-6a5e-4766-996f-8f60d1c6b766" path="/var/lib/kubelet/pods/d24e114e-6a5e-4766-996f-8f60d1c6b766/volumes" Jan 31 05:36:45 crc kubenswrapper[5050]: I0131 05:36:45.634778 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:45 crc kubenswrapper[5050]: I0131 05:36:45.635251 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:45 crc kubenswrapper[5050]: I0131 05:36:45.661472 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:45 crc kubenswrapper[5050]: I0131 05:36:45.687631 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kvlgc" Jan 31 05:36:46 crc kubenswrapper[5050]: I0131 05:36:46.652240 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-sh9db" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.334972 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl"] Jan 31 05:36:47 crc kubenswrapper[5050]: E0131 05:36:47.335309 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24e114e-6a5e-4766-996f-8f60d1c6b766" containerName="registry-server" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.335332 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24e114e-6a5e-4766-996f-8f60d1c6b766" containerName="registry-server" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.335494 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24e114e-6a5e-4766-996f-8f60d1c6b766" containerName="registry-server" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.336624 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.338633 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sqq6t" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.352447 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl"] Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.477200 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-util\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.477418 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79q2m\" (UniqueName: \"kubernetes.io/projected/17fa3b32-c974-4d30-be03-3d92d42e9a79-kube-api-access-79q2m\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.477588 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-bundle\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.578745 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-util\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.578815 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79q2m\" (UniqueName: \"kubernetes.io/projected/17fa3b32-c974-4d30-be03-3d92d42e9a79-kube-api-access-79q2m\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.578858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-bundle\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.579303 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-bundle\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.579302 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-util\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.597135 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79q2m\" (UniqueName: \"kubernetes.io/projected/17fa3b32-c974-4d30-be03-3d92d42e9a79-kube-api-access-79q2m\") pod \"8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.661086 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:47 crc kubenswrapper[5050]: I0131 05:36:47.879128 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl"] Jan 31 05:36:48 crc kubenswrapper[5050]: I0131 05:36:48.668354 5050 generic.go:334] "Generic (PLEG): container finished" podID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerID="f946705988baf340d48dd980cd45d54cabcff95f0da68322d81586eab660382f" exitCode=0 Jan 31 05:36:48 crc kubenswrapper[5050]: I0131 05:36:48.668410 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" event={"ID":"17fa3b32-c974-4d30-be03-3d92d42e9a79","Type":"ContainerDied","Data":"f946705988baf340d48dd980cd45d54cabcff95f0da68322d81586eab660382f"} Jan 31 05:36:48 crc kubenswrapper[5050]: I0131 05:36:48.668625 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" event={"ID":"17fa3b32-c974-4d30-be03-3d92d42e9a79","Type":"ContainerStarted","Data":"372c5f1749d25f23e7e906294ba9326f48fc98c42c758dafe04128ed8cc15afa"} Jan 31 05:36:49 crc kubenswrapper[5050]: I0131 05:36:49.677819 5050 generic.go:334] "Generic (PLEG): container finished" podID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerID="af0320217a6e35c4d8bf1969f6368095746457b2e07fea9d65b223eaca858278" exitCode=0 Jan 31 05:36:49 crc kubenswrapper[5050]: I0131 05:36:49.677913 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" event={"ID":"17fa3b32-c974-4d30-be03-3d92d42e9a79","Type":"ContainerDied","Data":"af0320217a6e35c4d8bf1969f6368095746457b2e07fea9d65b223eaca858278"} Jan 31 05:36:50 crc kubenswrapper[5050]: I0131 05:36:50.689201 5050 generic.go:334] 
"Generic (PLEG): container finished" podID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerID="fc35a960c1fa6bf7c9a7d13c1d07b52526be22c69bfc8de8a23e6ebbea27abed" exitCode=0 Jan 31 05:36:50 crc kubenswrapper[5050]: I0131 05:36:50.689238 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" event={"ID":"17fa3b32-c974-4d30-be03-3d92d42e9a79","Type":"ContainerDied","Data":"fc35a960c1fa6bf7c9a7d13c1d07b52526be22c69bfc8de8a23e6ebbea27abed"} Jan 31 05:36:51 crc kubenswrapper[5050]: I0131 05:36:51.970480 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.042213 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-bundle\") pod \"17fa3b32-c974-4d30-be03-3d92d42e9a79\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.042423 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-util\") pod \"17fa3b32-c974-4d30-be03-3d92d42e9a79\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.042460 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79q2m\" (UniqueName: \"kubernetes.io/projected/17fa3b32-c974-4d30-be03-3d92d42e9a79-kube-api-access-79q2m\") pod \"17fa3b32-c974-4d30-be03-3d92d42e9a79\" (UID: \"17fa3b32-c974-4d30-be03-3d92d42e9a79\") " Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.043265 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-bundle" (OuterVolumeSpecName: "bundle") pod "17fa3b32-c974-4d30-be03-3d92d42e9a79" (UID: "17fa3b32-c974-4d30-be03-3d92d42e9a79"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.052239 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fa3b32-c974-4d30-be03-3d92d42e9a79-kube-api-access-79q2m" (OuterVolumeSpecName: "kube-api-access-79q2m") pod "17fa3b32-c974-4d30-be03-3d92d42e9a79" (UID: "17fa3b32-c974-4d30-be03-3d92d42e9a79"). InnerVolumeSpecName "kube-api-access-79q2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.057478 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-util" (OuterVolumeSpecName: "util") pod "17fa3b32-c974-4d30-be03-3d92d42e9a79" (UID: "17fa3b32-c974-4d30-be03-3d92d42e9a79"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.144749 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.145219 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17fa3b32-c974-4d30-be03-3d92d42e9a79-util\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.145240 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79q2m\" (UniqueName: \"kubernetes.io/projected/17fa3b32-c974-4d30-be03-3d92d42e9a79-kube-api-access-79q2m\") on node \"crc\" DevicePath \"\"" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.709698 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" event={"ID":"17fa3b32-c974-4d30-be03-3d92d42e9a79","Type":"ContainerDied","Data":"372c5f1749d25f23e7e906294ba9326f48fc98c42c758dafe04128ed8cc15afa"} Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.709765 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="372c5f1749d25f23e7e906294ba9326f48fc98c42c758dafe04128ed8cc15afa" Jan 31 05:36:52 crc kubenswrapper[5050]: I0131 05:36:52.710144 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.521107 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-87557d48c-pffz5"] Jan 31 05:36:54 crc kubenswrapper[5050]: E0131 05:36:54.521406 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerName="pull" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.521424 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerName="pull" Jan 31 05:36:54 crc kubenswrapper[5050]: E0131 05:36:54.521439 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerName="extract" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.521447 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerName="extract" Jan 31 05:36:54 crc kubenswrapper[5050]: E0131 05:36:54.521477 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerName="util" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.521486 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerName="util" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.521659 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fa3b32-c974-4d30-be03-3d92d42e9a79" containerName="extract" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.522163 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.529765 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-bc2bc" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.564046 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-87557d48c-pffz5"] Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.693187 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpz2\" (UniqueName: \"kubernetes.io/projected/604148c5-02b3-442c-b7d1-9e1434d74a2c-kube-api-access-nbpz2\") pod \"openstack-operator-controller-init-87557d48c-pffz5\" (UID: \"604148c5-02b3-442c-b7d1-9e1434d74a2c\") " pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.794566 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbpz2\" (UniqueName: \"kubernetes.io/projected/604148c5-02b3-442c-b7d1-9e1434d74a2c-kube-api-access-nbpz2\") pod \"openstack-operator-controller-init-87557d48c-pffz5\" (UID: \"604148c5-02b3-442c-b7d1-9e1434d74a2c\") " pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.812024 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbpz2\" (UniqueName: \"kubernetes.io/projected/604148c5-02b3-442c-b7d1-9e1434d74a2c-kube-api-access-nbpz2\") pod \"openstack-operator-controller-init-87557d48c-pffz5\" (UID: \"604148c5-02b3-442c-b7d1-9e1434d74a2c\") " pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" Jan 31 05:36:54 crc kubenswrapper[5050]: I0131 05:36:54.840566 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" Jan 31 05:36:55 crc kubenswrapper[5050]: I0131 05:36:55.269762 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-87557d48c-pffz5"] Jan 31 05:36:55 crc kubenswrapper[5050]: I0131 05:36:55.728442 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" event={"ID":"604148c5-02b3-442c-b7d1-9e1434d74a2c","Type":"ContainerStarted","Data":"3fabc2067f3038127f251c302b1bdc80acd16420e8ff733cfcc094f096d843fe"} Jan 31 05:37:00 crc kubenswrapper[5050]: I0131 05:37:00.755557 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" event={"ID":"604148c5-02b3-442c-b7d1-9e1434d74a2c","Type":"ContainerStarted","Data":"8b0b1ce3de0e96cbaeeb4eb1bb3d3cb51e43033df93cadff675fe3432f7271bc"} Jan 31 05:37:01 crc kubenswrapper[5050]: I0131 05:37:01.759521 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" Jan 31 05:37:01 crc kubenswrapper[5050]: I0131 05:37:01.787462 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" podStartSLOduration=2.536678329 podStartE2EDuration="7.787448109s" podCreationTimestamp="2026-01-31 05:36:54 +0000 UTC" firstStartedPulling="2026-01-31 05:36:55.28479482 +0000 UTC m=+940.333956426" lastFinishedPulling="2026-01-31 05:37:00.53556461 +0000 UTC m=+945.584726206" observedRunningTime="2026-01-31 05:37:01.785618309 +0000 UTC m=+946.834779915" watchObservedRunningTime="2026-01-31 05:37:01.787448109 +0000 UTC m=+946.836609705" Jan 31 05:37:09 crc kubenswrapper[5050]: I0131 05:37:09.018174 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:37:09 crc kubenswrapper[5050]: I0131 05:37:09.018669 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:37:14 crc kubenswrapper[5050]: I0131 05:37:14.844301 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-87557d48c-pffz5" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.406729 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.408247 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.410656 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pvxkj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.411535 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.412436 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.413879 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dvtdb" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.420203 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.430195 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.430908 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.435226 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-nm4wd" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.437101 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.443675 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.444537 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.447313 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cj7mc" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.460654 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.478793 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.487427 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8p6s\" (UniqueName: \"kubernetes.io/projected/6c99a6ca-0409-48ea-ab61-681b887f2f6f-kube-api-access-m8p6s\") pod \"glance-operator-controller-manager-8886f4c47-fvrm9\" (UID: \"6c99a6ca-0409-48ea-ab61-681b887f2f6f\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.487625 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlvrb\" (UniqueName: \"kubernetes.io/projected/c8455073-ced2-40e7-931f-ca08690af6d1-kube-api-access-tlvrb\") pod \"cinder-operator-controller-manager-8d874c8fc-sz8cj\" (UID: \"c8455073-ced2-40e7-931f-ca08690af6d1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.487688 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjmm\" (UniqueName: \"kubernetes.io/projected/97258518-ab25-46fa-85b3-bf5c65982b69-kube-api-access-qvjmm\") pod 
\"barbican-operator-controller-manager-7b6c4d8c5f-k54v4\" (UID: \"97258518-ab25-46fa-85b3-bf5c65982b69\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.487718 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsqr\" (UniqueName: \"kubernetes.io/projected/edcfb389-aa48-48d3-a408-624b6d081495-kube-api-access-hmsqr\") pod \"designate-operator-controller-manager-6d9697b7f4-vfcdz\" (UID: \"edcfb389-aa48-48d3-a408-624b6d081495\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.504533 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qh226"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.505319 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.507273 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hc59x" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.520212 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qh226"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.525297 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.526135 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.531976 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xh6md" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.535429 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.555279 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-v96rv"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.559612 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.562987 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.563369 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5z4nq" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.571080 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-v96rv"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.584020 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.584760 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589070 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t82v\" (UniqueName: \"kubernetes.io/projected/702ce305-8b7b-445c-9d94-442b12074572-kube-api-access-6t82v\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589112 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8p6s\" (UniqueName: \"kubernetes.io/projected/6c99a6ca-0409-48ea-ab61-681b887f2f6f-kube-api-access-m8p6s\") pod \"glance-operator-controller-manager-8886f4c47-fvrm9\" (UID: \"6c99a6ca-0409-48ea-ab61-681b887f2f6f\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589148 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r4zq\" (UniqueName: \"kubernetes.io/projected/3f77e259-db73-4420-9448-3d1239afe25f-kube-api-access-8r4zq\") pod \"horizon-operator-controller-manager-5fb775575f-gb8gp\" (UID: \"3f77e259-db73-4420-9448-3d1239afe25f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589179 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:35 crc kubenswrapper[5050]: 
I0131 05:37:35.589201 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlvrb\" (UniqueName: \"kubernetes.io/projected/c8455073-ced2-40e7-931f-ca08690af6d1-kube-api-access-tlvrb\") pod \"cinder-operator-controller-manager-8d874c8fc-sz8cj\" (UID: \"c8455073-ced2-40e7-931f-ca08690af6d1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjmm\" (UniqueName: \"kubernetes.io/projected/97258518-ab25-46fa-85b3-bf5c65982b69-kube-api-access-qvjmm\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-k54v4\" (UID: \"97258518-ab25-46fa-85b3-bf5c65982b69\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589243 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmsqr\" (UniqueName: \"kubernetes.io/projected/edcfb389-aa48-48d3-a408-624b6d081495-kube-api-access-hmsqr\") pod \"designate-operator-controller-manager-6d9697b7f4-vfcdz\" (UID: \"edcfb389-aa48-48d3-a408-624b6d081495\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589268 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmd5\" (UniqueName: \"kubernetes.io/projected/0cca343b-0815-48b8-a05b-9246a0235ee7-kube-api-access-2gmd5\") pod \"heat-operator-controller-manager-69d6db494d-qh226\" (UID: \"0cca343b-0815-48b8-a05b-9246a0235ee7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.589988 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.590081 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8ss2x" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.595492 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.596192 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.601259 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zxxqh" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.607178 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.631298 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-669699fbb-92tbj"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.632007 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.637101 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6ncnj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.639304 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmsqr\" (UniqueName: \"kubernetes.io/projected/edcfb389-aa48-48d3-a408-624b6d081495-kube-api-access-hmsqr\") pod \"designate-operator-controller-manager-6d9697b7f4-vfcdz\" (UID: \"edcfb389-aa48-48d3-a408-624b6d081495\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.639356 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlvrb\" (UniqueName: \"kubernetes.io/projected/c8455073-ced2-40e7-931f-ca08690af6d1-kube-api-access-tlvrb\") pod \"cinder-operator-controller-manager-8d874c8fc-sz8cj\" (UID: \"c8455073-ced2-40e7-931f-ca08690af6d1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.639658 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.639993 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjmm\" (UniqueName: \"kubernetes.io/projected/97258518-ab25-46fa-85b3-bf5c65982b69-kube-api-access-qvjmm\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-k54v4\" (UID: \"97258518-ab25-46fa-85b3-bf5c65982b69\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.640471 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.641261 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8p6s\" (UniqueName: \"kubernetes.io/projected/6c99a6ca-0409-48ea-ab61-681b887f2f6f-kube-api-access-m8p6s\") pod \"glance-operator-controller-manager-8886f4c47-fvrm9\" (UID: \"6c99a6ca-0409-48ea-ab61-681b887f2f6f\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.652244 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-669699fbb-92tbj"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.657425 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5xvt4" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690177 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t82v\" (UniqueName: \"kubernetes.io/projected/702ce305-8b7b-445c-9d94-442b12074572-kube-api-access-6t82v\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690231 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r4zq\" (UniqueName: \"kubernetes.io/projected/3f77e259-db73-4420-9448-3d1239afe25f-kube-api-access-8r4zq\") pod \"horizon-operator-controller-manager-5fb775575f-gb8gp\" (UID: \"3f77e259-db73-4420-9448-3d1239afe25f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690261 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5k9m\" (UniqueName: \"kubernetes.io/projected/a072243c-9f79-4f43-86c1-7a0275aadc2d-kube-api-access-t5k9m\") pod \"keystone-operator-controller-manager-84f48565d4-x5vs7\" (UID: \"a072243c-9f79-4f43-86c1-7a0275aadc2d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690282 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnnx\" (UniqueName: \"kubernetes.io/projected/2068f2d0-6afa-4df6-9d4b-37ea15900379-kube-api-access-7hnnx\") pod \"manila-operator-controller-manager-669699fbb-92tbj\" (UID: \"2068f2d0-6afa-4df6-9d4b-37ea15900379\") " pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690338 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gmd5\" (UniqueName: \"kubernetes.io/projected/0cca343b-0815-48b8-a05b-9246a0235ee7-kube-api-access-2gmd5\") pod \"heat-operator-controller-manager-69d6db494d-qh226\" (UID: \"0cca343b-0815-48b8-a05b-9246a0235ee7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690359 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l4ww\" (UniqueName: 
\"kubernetes.io/projected/5a2adbe7-1023-4099-a956-864a1dc07459-kube-api-access-7l4ww\") pod \"mariadb-operator-controller-manager-67bf948998-tts92\" (UID: \"5a2adbe7-1023-4099-a956-864a1dc07459\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.690374 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpqfg\" (UniqueName: \"kubernetes.io/projected/04bad2b0-6148-463e-a419-fa6c1526306c-kube-api-access-fpqfg\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w5w6h\" (UID: \"04bad2b0-6148-463e-a419-fa6c1526306c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" Jan 31 05:37:35 crc kubenswrapper[5050]: E0131 05:37:35.690881 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:35 crc kubenswrapper[5050]: E0131 05:37:35.690925 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert podName:702ce305-8b7b-445c-9d94-442b12074572 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:36.190907848 +0000 UTC m=+981.240069444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert") pod "infra-operator-controller-manager-79955696d6-v96rv" (UID: "702ce305-8b7b-445c-9d94-442b12074572") : secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.703138 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.703968 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.715038 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.715672 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4ch2x" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.716378 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.725307 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t82v\" (UniqueName: \"kubernetes.io/projected/702ce305-8b7b-445c-9d94-442b12074572-kube-api-access-6t82v\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.739402 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5zdg8" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.741445 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.754031 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gmd5\" (UniqueName: \"kubernetes.io/projected/0cca343b-0815-48b8-a05b-9246a0235ee7-kube-api-access-2gmd5\") pod \"heat-operator-controller-manager-69d6db494d-qh226\" (UID: \"0cca343b-0815-48b8-a05b-9246a0235ee7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.771163 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.788847 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.790510 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.799090 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r4zq\" (UniqueName: \"kubernetes.io/projected/3f77e259-db73-4420-9448-3d1239afe25f-kube-api-access-8r4zq\") pod \"horizon-operator-controller-manager-5fb775575f-gb8gp\" (UID: \"3f77e259-db73-4420-9448-3d1239afe25f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.804604 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnnx\" (UniqueName: \"kubernetes.io/projected/2068f2d0-6afa-4df6-9d4b-37ea15900379-kube-api-access-7hnnx\") pod \"manila-operator-controller-manager-669699fbb-92tbj\" (UID: \"2068f2d0-6afa-4df6-9d4b-37ea15900379\") " pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.804663 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l4ww\" (UniqueName: \"kubernetes.io/projected/5a2adbe7-1023-4099-a956-864a1dc07459-kube-api-access-7l4ww\") pod \"mariadb-operator-controller-manager-67bf948998-tts92\" (UID: \"5a2adbe7-1023-4099-a956-864a1dc07459\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.804679 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpqfg\" (UniqueName: \"kubernetes.io/projected/04bad2b0-6148-463e-a419-fa6c1526306c-kube-api-access-fpqfg\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w5w6h\" (UID: \"04bad2b0-6148-463e-a419-fa6c1526306c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.804745 
5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c44p7\" (UniqueName: \"kubernetes.io/projected/e02cf864-b078-4d57-b75a-0f6637da6869-kube-api-access-c44p7\") pod \"nova-operator-controller-manager-55bff696bd-dg6bt\" (UID: \"e02cf864-b078-4d57-b75a-0f6637da6869\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.804768 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5k9m\" (UniqueName: \"kubernetes.io/projected/a072243c-9f79-4f43-86c1-7a0275aadc2d-kube-api-access-t5k9m\") pod \"keystone-operator-controller-manager-84f48565d4-x5vs7\" (UID: \"a072243c-9f79-4f43-86c1-7a0275aadc2d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.804810 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgmvn\" (UniqueName: \"kubernetes.io/projected/6c249ee1-fe54-4869-a25c-b84eea14bb5c-kube-api-access-tgmvn\") pod \"neutron-operator-controller-manager-585dbc889-52jtl\" (UID: \"6c249ee1-fe54-4869-a25c-b84eea14bb5c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.813531 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.813706 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.822163 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 
05:37:35.822941 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.824458 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnnx\" (UniqueName: \"kubernetes.io/projected/2068f2d0-6afa-4df6-9d4b-37ea15900379-kube-api-access-7hnnx\") pod \"manila-operator-controller-manager-669699fbb-92tbj\" (UID: \"2068f2d0-6afa-4df6-9d4b-37ea15900379\") " pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.825348 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l4ww\" (UniqueName: \"kubernetes.io/projected/5a2adbe7-1023-4099-a956-864a1dc07459-kube-api-access-7l4ww\") pod \"mariadb-operator-controller-manager-67bf948998-tts92\" (UID: \"5a2adbe7-1023-4099-a956-864a1dc07459\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.826579 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.827000 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpqfg\" (UniqueName: \"kubernetes.io/projected/04bad2b0-6148-463e-a419-fa6c1526306c-kube-api-access-fpqfg\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w5w6h\" (UID: \"04bad2b0-6148-463e-a419-fa6c1526306c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.827402 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.827444 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.828775 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5k9m\" (UniqueName: \"kubernetes.io/projected/a072243c-9f79-4f43-86c1-7a0275aadc2d-kube-api-access-t5k9m\") pod \"keystone-operator-controller-manager-84f48565d4-x5vs7\" (UID: \"a072243c-9f79-4f43-86c1-7a0275aadc2d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.830500 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-d9sfg" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.842005 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.848148 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-57zph"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.849214 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.851380 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bmg2w" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.859991 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.863333 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-57zph"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.871602 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.872478 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.880728 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.883856 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzqg7" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.884623 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.886035 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.890873 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-khzcc" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.892149 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.894092 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.900469 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6pmzk" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.902536 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.908865 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.909531 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fngfb\" (UniqueName: \"kubernetes.io/projected/30ec4f54-d1f8-49dd-b254-7b560b08905e-kube-api-access-fngfb\") pod \"octavia-operator-controller-manager-6687f8d877-768m8\" (UID: \"30ec4f54-d1f8-49dd-b254-7b560b08905e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.909567 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5nd9w\" (UniqueName: \"kubernetes.io/projected/1cb7e321-484b-42e4-a276-0d27a7c5fc95-kube-api-access-5nd9w\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.909638 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xwn\" (UniqueName: \"kubernetes.io/projected/e90fb46f-4a14-4b3a-a330-418fce2fec93-kube-api-access-c6xwn\") pod \"placement-operator-controller-manager-5b964cf4cd-7sw6k\" (UID: \"e90fb46f-4a14-4b3a-a330-418fce2fec93\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.909663 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcxjb\" (UniqueName: \"kubernetes.io/projected/dae8abbe-3616-42f7-875f-454d03bda074-kube-api-access-gcxjb\") pod \"ovn-operator-controller-manager-788c46999f-57zph\" (UID: \"dae8abbe-3616-42f7-875f-454d03bda074\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.909705 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.909727 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c44p7\" (UniqueName: 
\"kubernetes.io/projected/e02cf864-b078-4d57-b75a-0f6637da6869-kube-api-access-c44p7\") pod \"nova-operator-controller-manager-55bff696bd-dg6bt\" (UID: \"e02cf864-b078-4d57-b75a-0f6637da6869\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.909759 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgmvn\" (UniqueName: \"kubernetes.io/projected/6c249ee1-fe54-4869-a25c-b84eea14bb5c-kube-api-access-tgmvn\") pod \"neutron-operator-controller-manager-585dbc889-52jtl\" (UID: \"6c249ee1-fe54-4869-a25c-b84eea14bb5c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.913045 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.913792 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblfr\" (UniqueName: \"kubernetes.io/projected/1605a776-2594-4959-a36e-70245cce24b4-kube-api-access-dblfr\") pod \"swift-operator-controller-manager-68fc8c869-wb8ls\" (UID: \"1605a776-2594-4959-a36e-70245cce24b4\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.929236 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.929637 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.930628 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.934392 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l4m7w" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.942582 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.949432 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgmvn\" (UniqueName: \"kubernetes.io/projected/6c249ee1-fe54-4869-a25c-b84eea14bb5c-kube-api-access-tgmvn\") pod \"neutron-operator-controller-manager-585dbc889-52jtl\" (UID: \"6c249ee1-fe54-4869-a25c-b84eea14bb5c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.950821 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c44p7\" (UniqueName: \"kubernetes.io/projected/e02cf864-b078-4d57-b75a-0f6637da6869-kube-api-access-c44p7\") pod \"nova-operator-controller-manager-55bff696bd-dg6bt\" (UID: \"e02cf864-b078-4d57-b75a-0f6637da6869\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.955066 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.955872 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.957509 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-64vmm" Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.969945 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84"] Jan 31 05:37:35 crc kubenswrapper[5050]: I0131 05:37:35.989091 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.004807 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-v62tj"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.006562 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.017693 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2svn8" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018415 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wch94\" (UniqueName: \"kubernetes.io/projected/4c7e8a65-3a04-4036-94bf-5df463991788-kube-api-access-wch94\") pod \"telemetry-operator-controller-manager-64b5b76f97-fcsch\" (UID: \"4c7e8a65-3a04-4036-94bf-5df463991788\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018438 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblfr\" (UniqueName: \"kubernetes.io/projected/1605a776-2594-4959-a36e-70245cce24b4-kube-api-access-dblfr\") pod \"swift-operator-controller-manager-68fc8c869-wb8ls\" (UID: \"1605a776-2594-4959-a36e-70245cce24b4\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018473 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fngfb\" (UniqueName: \"kubernetes.io/projected/30ec4f54-d1f8-49dd-b254-7b560b08905e-kube-api-access-fngfb\") pod 
\"octavia-operator-controller-manager-6687f8d877-768m8\" (UID: \"30ec4f54-d1f8-49dd-b254-7b560b08905e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018488 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nd9w\" (UniqueName: \"kubernetes.io/projected/1cb7e321-484b-42e4-a276-0d27a7c5fc95-kube-api-access-5nd9w\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018511 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfgj\" (UniqueName: \"kubernetes.io/projected/6fbf0eab-4931-4bb4-b894-95fb1f32407d-kube-api-access-zrfgj\") pod \"test-operator-controller-manager-56f8bfcd9f-9kr84\" (UID: \"6fbf0eab-4931-4bb4-b894-95fb1f32407d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018544 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xwn\" (UniqueName: \"kubernetes.io/projected/e90fb46f-4a14-4b3a-a330-418fce2fec93-kube-api-access-c6xwn\") pod \"placement-operator-controller-manager-5b964cf4cd-7sw6k\" (UID: \"e90fb46f-4a14-4b3a-a330-418fce2fec93\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018566 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcxjb\" (UniqueName: \"kubernetes.io/projected/dae8abbe-3616-42f7-875f-454d03bda074-kube-api-access-gcxjb\") pod \"ovn-operator-controller-manager-788c46999f-57zph\" (UID: \"dae8abbe-3616-42f7-875f-454d03bda074\") " 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.018809 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.018849 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert podName:1cb7e321-484b-42e4-a276-0d27a7c5fc95 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:36.51883628 +0000 UTC m=+981.567997876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" (UID: "1cb7e321-484b-42e4-a276-0d27a7c5fc95") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.018858 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-v62tj"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.062663 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nd9w\" (UniqueName: \"kubernetes.io/projected/1cb7e321-484b-42e4-a276-0d27a7c5fc95-kube-api-access-5nd9w\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.063036 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xwn\" (UniqueName: \"kubernetes.io/projected/e90fb46f-4a14-4b3a-a330-418fce2fec93-kube-api-access-c6xwn\") pod \"placement-operator-controller-manager-5b964cf4cd-7sw6k\" 
(UID: \"e90fb46f-4a14-4b3a-a330-418fce2fec93\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.063320 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblfr\" (UniqueName: \"kubernetes.io/projected/1605a776-2594-4959-a36e-70245cce24b4-kube-api-access-dblfr\") pod \"swift-operator-controller-manager-68fc8c869-wb8ls\" (UID: \"1605a776-2594-4959-a36e-70245cce24b4\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.064549 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fngfb\" (UniqueName: \"kubernetes.io/projected/30ec4f54-d1f8-49dd-b254-7b560b08905e-kube-api-access-fngfb\") pod \"octavia-operator-controller-manager-6687f8d877-768m8\" (UID: \"30ec4f54-d1f8-49dd-b254-7b560b08905e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.072549 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcxjb\" (UniqueName: \"kubernetes.io/projected/dae8abbe-3616-42f7-875f-454d03bda074-kube-api-access-gcxjb\") pod \"ovn-operator-controller-manager-788c46999f-57zph\" (UID: \"dae8abbe-3616-42f7-875f-454d03bda074\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.081660 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.111749 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.123732 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.125008 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvc9\" (UniqueName: \"kubernetes.io/projected/a56fce84-913d-42bd-9afe-8831d997c58f-kube-api-access-2mvc9\") pod \"watcher-operator-controller-manager-564965969-v62tj\" (UID: \"a56fce84-913d-42bd-9afe-8831d997c58f\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.125067 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wch94\" (UniqueName: \"kubernetes.io/projected/4c7e8a65-3a04-4036-94bf-5df463991788-kube-api-access-wch94\") pod \"telemetry-operator-controller-manager-64b5b76f97-fcsch\" (UID: \"4c7e8a65-3a04-4036-94bf-5df463991788\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.125118 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfgj\" (UniqueName: \"kubernetes.io/projected/6fbf0eab-4931-4bb4-b894-95fb1f32407d-kube-api-access-zrfgj\") pod \"test-operator-controller-manager-56f8bfcd9f-9kr84\" (UID: \"6fbf0eab-4931-4bb4-b894-95fb1f32407d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.159327 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.159573 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.162805 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wch94\" (UniqueName: \"kubernetes.io/projected/4c7e8a65-3a04-4036-94bf-5df463991788-kube-api-access-wch94\") pod \"telemetry-operator-controller-manager-64b5b76f97-fcsch\" (UID: \"4c7e8a65-3a04-4036-94bf-5df463991788\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.163080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfgj\" (UniqueName: \"kubernetes.io/projected/6fbf0eab-4931-4bb4-b894-95fb1f32407d-kube-api-access-zrfgj\") pod \"test-operator-controller-manager-56f8bfcd9f-9kr84\" (UID: \"6fbf0eab-4931-4bb4-b894-95fb1f32407d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.165591 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.180401 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.206892 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.239406 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mvc9\" (UniqueName: \"kubernetes.io/projected/a56fce84-913d-42bd-9afe-8831d997c58f-kube-api-access-2mvc9\") pod \"watcher-operator-controller-manager-564965969-v62tj\" (UID: \"a56fce84-913d-42bd-9afe-8831d997c58f\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.239511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.240246 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.240318 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert podName:702ce305-8b7b-445c-9d94-442b12074572 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:37.24029202 +0000 UTC m=+982.289453616 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert") pod "infra-operator-controller-manager-79955696d6-v96rv" (UID: "702ce305-8b7b-445c-9d94-442b12074572") : secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.243556 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.245331 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.249606 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-z64kt" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.249851 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.250109 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.256745 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.298538 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mvc9\" (UniqueName: \"kubernetes.io/projected/a56fce84-913d-42bd-9afe-8831d997c58f-kube-api-access-2mvc9\") pod \"watcher-operator-controller-manager-564965969-v62tj\" (UID: \"a56fce84-913d-42bd-9afe-8831d997c58f\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.341474 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrx5\" (UniqueName: \"kubernetes.io/projected/a5c20cf0-d535-4809-a555-7f439ebcc243-kube-api-access-csrx5\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.341737 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.341765 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.346917 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.347698 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.355387 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-w2rfr" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.386104 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.401525 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.442823 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csrx5\" (UniqueName: \"kubernetes.io/projected/a5c20cf0-d535-4809-a555-7f439ebcc243-kube-api-access-csrx5\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.442868 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.442897 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " 
pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.442931 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9rkp\" (UniqueName: \"kubernetes.io/projected/afb534c6-c882-4e20-b9d3-c4e732f60471-kube-api-access-b9rkp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5sbj8\" (UID: \"afb534c6-c882-4e20-b9d3-c4e732f60471\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.443090 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.443123 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.443146 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:36.943129241 +0000 UTC m=+981.992290837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "metrics-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.443167 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:36.943154051 +0000 UTC m=+981.992315647 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.450135 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.462273 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csrx5\" (UniqueName: \"kubernetes.io/projected/a5c20cf0-d535-4809-a555-7f439ebcc243-kube-api-access-csrx5\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.548350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.548386 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9rkp\" (UniqueName: \"kubernetes.io/projected/afb534c6-c882-4e20-b9d3-c4e732f60471-kube-api-access-b9rkp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5sbj8\" (UID: \"afb534c6-c882-4e20-b9d3-c4e732f60471\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.548497 5050 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.548586 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert podName:1cb7e321-484b-42e4-a276-0d27a7c5fc95 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:37.548569024 +0000 UTC m=+982.597730620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" (UID: "1cb7e321-484b-42e4-a276-0d27a7c5fc95") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.552818 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.572170 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9rkp\" (UniqueName: \"kubernetes.io/projected/afb534c6-c882-4e20-b9d3-c4e732f60471-kube-api-access-b9rkp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5sbj8\" (UID: \"afb534c6-c882-4e20-b9d3-c4e732f60471\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.621537 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.672228 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj"] Jan 31 05:37:36 crc kubenswrapper[5050]: W0131 05:37:36.708319 5050 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c99a6ca_0409_48ea_ab61_681b887f2f6f.slice/crio-a4d4f5b8cfd4631c75696f1436050ee43c98a5378af62adac6674097ebea3eae WatchSource:0}: Error finding container a4d4f5b8cfd4631c75696f1436050ee43c98a5378af62adac6674097ebea3eae: Status 404 returned error can't find the container with id a4d4f5b8cfd4631c75696f1436050ee43c98a5378af62adac6674097ebea3eae Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.742590 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.841044 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qh226"] Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.955041 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: I0131 05:37:36.955087 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.955230 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.955245 5050 secret.go:188] 
Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.955288 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:37.955272373 +0000 UTC m=+983.004433969 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "webhook-server-cert" not found Jan 31 05:37:36 crc kubenswrapper[5050]: E0131 05:37:36.955313 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:37.955296333 +0000 UTC m=+983.004457929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "metrics-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.005157 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" event={"ID":"6c99a6ca-0409-48ea-ab61-681b887f2f6f","Type":"ContainerStarted","Data":"a4d4f5b8cfd4631c75696f1436050ee43c98a5378af62adac6674097ebea3eae"} Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.006968 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" event={"ID":"c8455073-ced2-40e7-931f-ca08690af6d1","Type":"ContainerStarted","Data":"fe37fb60d08b9d5edbb33433bdbddb6b9f2b1b190c6b74370864a02b26f3411d"} Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.007716 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" event={"ID":"0cca343b-0815-48b8-a05b-9246a0235ee7","Type":"ContainerStarted","Data":"e2a5ea2324928d38a467f3a260676a861c0f9c9dfccbb25599bdc69a227a0946"} Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.008487 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" event={"ID":"97258518-ab25-46fa-85b3-bf5c65982b69","Type":"ContainerStarted","Data":"a26a2fc95507d5fd3ad221a3f9f5a436b6c1cdbae0117e188c2c22cfc34078a0"} Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.180000 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h"] Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.184271 5050 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04bad2b0_6148_463e_a419_fa6c1526306c.slice/crio-1fbd20e72fed1aeaa9ea6b52d16c923a2e976b25d110026fa40e9b173f769c95 WatchSource:0}: Error finding container 1fbd20e72fed1aeaa9ea6b52d16c923a2e976b25d110026fa40e9b173f769c95: Status 404 returned error can't find the container with id 1fbd20e72fed1aeaa9ea6b52d16c923a2e976b25d110026fa40e9b173f769c95 Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.263132 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.263280 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.263327 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert podName:702ce305-8b7b-445c-9d94-442b12074572 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:39.26331298 +0000 UTC m=+984.312474576 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert") pod "infra-operator-controller-manager-79955696d6-v96rv" (UID: "702ce305-8b7b-445c-9d94-442b12074572") : secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.366703 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-669699fbb-92tbj"] Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.374739 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92"] Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.380006 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz"] Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.387732 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-57zph"] Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.402012 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp"] Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.410742 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a2adbe7_1023_4099_a956_864a1dc07459.slice/crio-c956b3efe03231ea78978424418b65effb7b36e650543903fe138d3064dc82a8 WatchSource:0}: Error finding container c956b3efe03231ea78978424418b65effb7b36e650543903fe138d3064dc82a8: Status 404 returned error can't find the container with id c956b3efe03231ea78978424418b65effb7b36e650543903fe138d3064dc82a8 Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.412074 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedcfb389_aa48_48d3_a408_624b6d081495.slice/crio-b19a1bcd47f5a17de42c0403866f1a019e8a527b1657ea7ba2090b198e4d2d49 WatchSource:0}: Error finding container b19a1bcd47f5a17de42c0403866f1a019e8a527b1657ea7ba2090b198e4d2d49: Status 404 returned error can't find the container with id b19a1bcd47f5a17de42c0403866f1a019e8a527b1657ea7ba2090b198e4d2d49 Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.413435 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f77e259_db73_4420_9448_3d1239afe25f.slice/crio-9419b86d6a4586740629da9d1936f0aa21113cef4fac5f500de6bdcda46a4768 WatchSource:0}: Error finding container 9419b86d6a4586740629da9d1936f0aa21113cef4fac5f500de6bdcda46a4768: Status 404 returned error can't find the container with id 9419b86d6a4586740629da9d1936f0aa21113cef4fac5f500de6bdcda46a4768 Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.427812 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8"] Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.441828 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls"] Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.443688 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1605a776_2594_4959_a36e_70245cce24b4.slice/crio-ced5ee6e5b0e26827869350cbb489602fa23d9321633a0730b8ae0eef60b79b2 WatchSource:0}: Error finding container ced5ee6e5b0e26827869350cbb489602fa23d9321633a0730b8ae0eef60b79b2: Status 404 returned error can't find the container with id ced5ee6e5b0e26827869350cbb489602fa23d9321633a0730b8ae0eef60b79b2 Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.446881 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-564965969-v62tj"] Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.448042 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode02cf864_b078_4d57_b75a_0f6637da6869.slice/crio-8f29ca0da444f44b65c5ee084db05795fb62cd3f00a23820bfab83dc09a28360 WatchSource:0}: Error finding container 8f29ca0da444f44b65c5ee084db05795fb62cd3f00a23820bfab83dc09a28360: Status 404 returned error can't find the container with id 8f29ca0da444f44b65c5ee084db05795fb62cd3f00a23820bfab83dc09a28360 Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.448390 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30ec4f54_d1f8_49dd_b254_7b560b08905e.slice/crio-b8e26f2adc95c4ff998e9fbb03318ad22119dfdd6f593d6c5ef77a3fff3ed958 WatchSource:0}: Error finding container b8e26f2adc95c4ff998e9fbb03318ad22119dfdd6f593d6c5ef77a3fff3ed958: Status 404 returned error can't find the container with id b8e26f2adc95c4ff998e9fbb03318ad22119dfdd6f593d6c5ef77a3fff3ed958 Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.452470 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt"] Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.452869 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c249ee1_fe54_4869_a25c_b84eea14bb5c.slice/crio-78232b4924b0a6b80e313613561f9cd0f56be72d7f09192a29a8e95da16f1872 WatchSource:0}: Error finding container 78232b4924b0a6b80e313613561f9cd0f56be72d7f09192a29a8e95da16f1872: Status 404 returned error can't find the container with id 78232b4924b0a6b80e313613561f9cd0f56be72d7f09192a29a8e95da16f1872 Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.458148 5050 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch"] Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.461790 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda072243c_9f79_4f43_86c1_7a0275aadc2d.slice/crio-658e2ff13e92d8e7c9133bab5e0dcc7b7d53edbb866d6818b2b709f9f403d030 WatchSource:0}: Error finding container 658e2ff13e92d8e7c9133bab5e0dcc7b7d53edbb866d6818b2b709f9f403d030: Status 404 returned error can't find the container with id 658e2ff13e92d8e7c9133bab5e0dcc7b7d53edbb866d6818b2b709f9f403d030 Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.465331 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl"] Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.475046 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k"] Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.475250 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mvc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-v62tj_openstack-operators(a56fce84-913d-42bd-9afe-8831d997c58f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.475284 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c44p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-dg6bt_openstack-operators(e02cf864-b078-4d57-b75a-0f6637da6869): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.475370 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6xwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-7sw6k_openstack-operators(e90fb46f-4a14-4b3a-a330-418fce2fec93): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.475475 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t5k9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-x5vs7_openstack-operators(a072243c-9f79-4f43-86c1-7a0275aadc2d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.476572 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" podUID="e90fb46f-4a14-4b3a-a330-418fce2fec93" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.476639 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" podUID="a56fce84-913d-42bd-9afe-8831d997c58f" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.476664 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" podUID="a072243c-9f79-4f43-86c1-7a0275aadc2d" Jan 31 
05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.476701 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" podUID="e02cf864-b078-4d57-b75a-0f6637da6869" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.480632 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9rkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5sbj8_openstack-operators(afb534c6-c882-4e20-b9d3-c4e732f60471): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.482867 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7"] Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.483007 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" podUID="afb534c6-c882-4e20-b9d3-c4e732f60471" Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.487169 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8"] Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.568867 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.569042 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.569088 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert podName:1cb7e321-484b-42e4-a276-0d27a7c5fc95 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:39.569075006 +0000 UTC m=+984.618236602 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" (UID: "1cb7e321-484b-42e4-a276-0d27a7c5fc95") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.624551 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84"] Jan 31 05:37:37 crc kubenswrapper[5050]: W0131 05:37:37.631485 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fbf0eab_4931_4bb4_b894_95fb1f32407d.slice/crio-ad5befaf378db472f908d1eec186cd9475caff8c56ff028192a04731d48d8ad0 WatchSource:0}: Error finding container ad5befaf378db472f908d1eec186cd9475caff8c56ff028192a04731d48d8ad0: Status 404 returned error can't find the container with id ad5befaf378db472f908d1eec186cd9475caff8c56ff028192a04731d48d8ad0 Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.982308 
5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:37 crc kubenswrapper[5050]: I0131 05:37:37.982381 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.982568 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.982629 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:39.982609818 +0000 UTC m=+985.031771414 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "webhook-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.982691 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 05:37:37 crc kubenswrapper[5050]: E0131 05:37:37.982716 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:39.98270824 +0000 UTC m=+985.031869836 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "metrics-server-cert" not found Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.039818 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" event={"ID":"e02cf864-b078-4d57-b75a-0f6637da6869","Type":"ContainerStarted","Data":"8f29ca0da444f44b65c5ee084db05795fb62cd3f00a23820bfab83dc09a28360"} Jan 31 05:37:38 crc kubenswrapper[5050]: E0131 05:37:38.042832 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" podUID="e02cf864-b078-4d57-b75a-0f6637da6869" Jan 31 05:37:38 crc 
kubenswrapper[5050]: I0131 05:37:38.046077 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" event={"ID":"dae8abbe-3616-42f7-875f-454d03bda074","Type":"ContainerStarted","Data":"2ccb259dfdb21dc1a8e068d2986ba47a1760220049b565029bc03bf7dda52cc1"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.048059 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" event={"ID":"a56fce84-913d-42bd-9afe-8831d997c58f","Type":"ContainerStarted","Data":"aa2c6e64845737b848452cac735b6159230444de293e2614f53c006606fe87e8"} Jan 31 05:37:38 crc kubenswrapper[5050]: E0131 05:37:38.049910 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" podUID="a56fce84-913d-42bd-9afe-8831d997c58f" Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.050321 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" event={"ID":"5a2adbe7-1023-4099-a956-864a1dc07459","Type":"ContainerStarted","Data":"c956b3efe03231ea78978424418b65effb7b36e650543903fe138d3064dc82a8"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.057522 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" event={"ID":"edcfb389-aa48-48d3-a408-624b6d081495","Type":"ContainerStarted","Data":"b19a1bcd47f5a17de42c0403866f1a019e8a527b1657ea7ba2090b198e4d2d49"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.062898 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" event={"ID":"6c249ee1-fe54-4869-a25c-b84eea14bb5c","Type":"ContainerStarted","Data":"78232b4924b0a6b80e313613561f9cd0f56be72d7f09192a29a8e95da16f1872"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.064472 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" event={"ID":"afb534c6-c882-4e20-b9d3-c4e732f60471","Type":"ContainerStarted","Data":"a1181b2293e050f0a6746c5cc7df10b093cec98b96ffafd1bd502659f604be4b"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.067077 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" event={"ID":"1605a776-2594-4959-a36e-70245cce24b4","Type":"ContainerStarted","Data":"ced5ee6e5b0e26827869350cbb489602fa23d9321633a0730b8ae0eef60b79b2"} Jan 31 05:37:38 crc kubenswrapper[5050]: E0131 05:37:38.069967 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" podUID="afb534c6-c882-4e20-b9d3-c4e732f60471" Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.073101 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" event={"ID":"a072243c-9f79-4f43-86c1-7a0275aadc2d","Type":"ContainerStarted","Data":"658e2ff13e92d8e7c9133bab5e0dcc7b7d53edbb866d6818b2b709f9f403d030"} Jan 31 05:37:38 crc kubenswrapper[5050]: E0131 05:37:38.074520 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" podUID="a072243c-9f79-4f43-86c1-7a0275aadc2d" Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.075219 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" event={"ID":"2068f2d0-6afa-4df6-9d4b-37ea15900379","Type":"ContainerStarted","Data":"cd3cf29a905a46eb6915a2716e126a72b359993ba99e0b4ee23f92592507d07a"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.081786 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" event={"ID":"6fbf0eab-4931-4bb4-b894-95fb1f32407d","Type":"ContainerStarted","Data":"ad5befaf378db472f908d1eec186cd9475caff8c56ff028192a04731d48d8ad0"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.087274 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" event={"ID":"3f77e259-db73-4420-9448-3d1239afe25f","Type":"ContainerStarted","Data":"9419b86d6a4586740629da9d1936f0aa21113cef4fac5f500de6bdcda46a4768"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.094406 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" event={"ID":"30ec4f54-d1f8-49dd-b254-7b560b08905e","Type":"ContainerStarted","Data":"b8e26f2adc95c4ff998e9fbb03318ad22119dfdd6f593d6c5ef77a3fff3ed958"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.098478 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" 
event={"ID":"e90fb46f-4a14-4b3a-a330-418fce2fec93","Type":"ContainerStarted","Data":"cec5a8dfa27c29caab50ecbbdadc4674f027abef2dd425e011146bf7e0749d0b"} Jan 31 05:37:38 crc kubenswrapper[5050]: E0131 05:37:38.106278 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" podUID="e90fb46f-4a14-4b3a-a330-418fce2fec93" Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.120355 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" event={"ID":"04bad2b0-6148-463e-a419-fa6c1526306c","Type":"ContainerStarted","Data":"1fbd20e72fed1aeaa9ea6b52d16c923a2e976b25d110026fa40e9b173f769c95"} Jan 31 05:37:38 crc kubenswrapper[5050]: I0131 05:37:38.122648 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" event={"ID":"4c7e8a65-3a04-4036-94bf-5df463991788","Type":"ContainerStarted","Data":"8c204feca566975c72bfdd78b551f65fd699cfaa4c90f4b21e631377cd2fae80"} Jan 31 05:37:39 crc kubenswrapper[5050]: I0131 05:37:39.018343 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:37:39 crc kubenswrapper[5050]: I0131 05:37:39.018682 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.144727 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" podUID="e90fb46f-4a14-4b3a-a330-418fce2fec93" Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.144856 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" podUID="a072243c-9f79-4f43-86c1-7a0275aadc2d" Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.144994 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" podUID="e02cf864-b078-4d57-b75a-0f6637da6869" Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.145102 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" podUID="afb534c6-c882-4e20-b9d3-c4e732f60471" Jan 31 
05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.146030 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" podUID="a56fce84-913d-42bd-9afe-8831d997c58f" Jan 31 05:37:39 crc kubenswrapper[5050]: I0131 05:37:39.313758 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.313910 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.313980 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert podName:702ce305-8b7b-445c-9d94-442b12074572 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:43.313964593 +0000 UTC m=+988.363126189 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert") pod "infra-operator-controller-manager-79955696d6-v96rv" (UID: "702ce305-8b7b-445c-9d94-442b12074572") : secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:39 crc kubenswrapper[5050]: I0131 05:37:39.619274 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.619476 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:39 crc kubenswrapper[5050]: E0131 05:37:39.619556 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert podName:1cb7e321-484b-42e4-a276-0d27a7c5fc95 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:43.619534964 +0000 UTC m=+988.668696560 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" (UID: "1cb7e321-484b-42e4-a276-0d27a7c5fc95") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:40 crc kubenswrapper[5050]: I0131 05:37:40.030287 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:40 crc kubenswrapper[5050]: I0131 05:37:40.030683 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:40 crc kubenswrapper[5050]: E0131 05:37:40.030601 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 05:37:40 crc kubenswrapper[5050]: E0131 05:37:40.030858 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:44.030829756 +0000 UTC m=+989.079991362 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "metrics-server-cert" not found Jan 31 05:37:40 crc kubenswrapper[5050]: E0131 05:37:40.031015 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 05:37:40 crc kubenswrapper[5050]: E0131 05:37:40.031091 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:44.031070632 +0000 UTC m=+989.080232428 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "webhook-server-cert" not found Jan 31 05:37:43 crc kubenswrapper[5050]: I0131 05:37:43.380164 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:43 crc kubenswrapper[5050]: E0131 05:37:43.380488 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:43 crc kubenswrapper[5050]: E0131 05:37:43.380630 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert 
podName:702ce305-8b7b-445c-9d94-442b12074572 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:51.38061653 +0000 UTC m=+996.429778126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert") pod "infra-operator-controller-manager-79955696d6-v96rv" (UID: "702ce305-8b7b-445c-9d94-442b12074572") : secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:43 crc kubenswrapper[5050]: I0131 05:37:43.683456 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:43 crc kubenswrapper[5050]: E0131 05:37:43.683600 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:43 crc kubenswrapper[5050]: E0131 05:37:43.683663 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert podName:1cb7e321-484b-42e4-a276-0d27a7c5fc95 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:51.683648282 +0000 UTC m=+996.732809878 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" (UID: "1cb7e321-484b-42e4-a276-0d27a7c5fc95") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:44 crc kubenswrapper[5050]: I0131 05:37:44.089368 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:44 crc kubenswrapper[5050]: I0131 05:37:44.089466 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:44 crc kubenswrapper[5050]: E0131 05:37:44.089666 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 05:37:44 crc kubenswrapper[5050]: E0131 05:37:44.089739 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:52.089716824 +0000 UTC m=+997.138878460 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "webhook-server-cert" not found Jan 31 05:37:44 crc kubenswrapper[5050]: E0131 05:37:44.089917 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 05:37:44 crc kubenswrapper[5050]: E0131 05:37:44.090020 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:37:52.089996961 +0000 UTC m=+997.139158567 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "metrics-server-cert" not found Jan 31 05:37:48 crc kubenswrapper[5050]: E0131 05:37:48.673848 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.74:5001/openstack-k8s-operators/manila-operator:5af9394f984ff8d087dae2ba00eb37acd407a667" Jan 31 05:37:48 crc kubenswrapper[5050]: E0131 05:37:48.675178 5050 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.74:5001/openstack-k8s-operators/manila-operator:5af9394f984ff8d087dae2ba00eb37acd407a667" Jan 31 05:37:48 crc kubenswrapper[5050]: E0131 05:37:48.675415 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.74:5001/openstack-k8s-operators/manila-operator:5af9394f984ff8d087dae2ba00eb37acd407a667,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hnnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-669699fbb-92tbj_openstack-operators(2068f2d0-6afa-4df6-9d4b-37ea15900379): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:37:48 crc kubenswrapper[5050]: E0131 05:37:48.676610 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" podUID="2068f2d0-6afa-4df6-9d4b-37ea15900379" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.224038 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" event={"ID":"c8455073-ced2-40e7-931f-ca08690af6d1","Type":"ContainerStarted","Data":"2d0ddbaf6c6eeba6f6d55d5d09cddb9d3b586b22caecf7f70ae40c15777da8e3"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.224478 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" Jan 31 05:37:49 crc 
kubenswrapper[5050]: I0131 05:37:49.240148 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" event={"ID":"edcfb389-aa48-48d3-a408-624b6d081495","Type":"ContainerStarted","Data":"4874edac9cbc7366b23602658f4d20d570c76bd8049d359ce0bb4c09cfe03e05"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.240252 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.246204 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" event={"ID":"6fbf0eab-4931-4bb4-b894-95fb1f32407d","Type":"ContainerStarted","Data":"3823db9cf5f2298843d515efae6708f7e31db71cf16f924d48924607bd9b9dcf"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.246280 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.257500 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" event={"ID":"30ec4f54-d1f8-49dd-b254-7b560b08905e","Type":"ContainerStarted","Data":"543f28c11729c6381a64f4238cdd20033731d10b38a13f1058e4c3d85efb759c"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.258163 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.265108 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" 
event={"ID":"4c7e8a65-3a04-4036-94bf-5df463991788","Type":"ContainerStarted","Data":"b5de5a3afd7994bf305579589c802124b9dfa3ca5a70cbf0af9823b9c2682731"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.265215 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.269868 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" event={"ID":"1605a776-2594-4959-a36e-70245cce24b4","Type":"ContainerStarted","Data":"0368ed4981cd67b2c16aa22c5939f90c679826640556a7ff414b9e13e71dbb1b"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.270629 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.277354 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" event={"ID":"0cca343b-0815-48b8-a05b-9246a0235ee7","Type":"ContainerStarted","Data":"ad075ea5c9dc5bcd8c9728a3bf1acacefcf5a4ae63270c2acf6d5fc5ffe31609"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.277731 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.282699 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.284737 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" podStartSLOduration=3.18661379 podStartE2EDuration="14.28471924s" podCreationTimestamp="2026-01-31 
05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.633945039 +0000 UTC m=+982.683106635" lastFinishedPulling="2026-01-31 05:37:48.732050489 +0000 UTC m=+993.781212085" observedRunningTime="2026-01-31 05:37:49.281559075 +0000 UTC m=+994.330720671" watchObservedRunningTime="2026-01-31 05:37:49.28471924 +0000 UTC m=+994.333880836" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.285168 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" podStartSLOduration=2.299943524 podStartE2EDuration="14.285163612s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:36.725273232 +0000 UTC m=+981.774434828" lastFinishedPulling="2026-01-31 05:37:48.71049332 +0000 UTC m=+993.759654916" observedRunningTime="2026-01-31 05:37:49.251451586 +0000 UTC m=+994.300613182" watchObservedRunningTime="2026-01-31 05:37:49.285163612 +0000 UTC m=+994.334325208" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.295000 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.298665 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" event={"ID":"97258518-ab25-46fa-85b3-bf5c65982b69","Type":"ContainerStarted","Data":"00f901f797124a61f88b6d588e2814c1a3e4bdcf72a5e41e8864f1b456d61e44"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.298787 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.300581 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" 
event={"ID":"6c249ee1-fe54-4869-a25c-b84eea14bb5c","Type":"ContainerStarted","Data":"6d7e65ea2db172b73e6103aac71d12d69f12cef4bf81428b71228ca916a20b3c"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.300725 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.310386 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" event={"ID":"04bad2b0-6148-463e-a419-fa6c1526306c","Type":"ContainerStarted","Data":"618c31d9962bf148302e94ee2d2bb1ec756f8508f1eb5cffd65ad10981b9c913"} Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.310585 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.316864 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" Jan 31 05:37:49 crc kubenswrapper[5050]: E0131 05:37:49.317975 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.74:5001/openstack-k8s-operators/manila-operator:5af9394f984ff8d087dae2ba00eb37acd407a667\\\"\"" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" podUID="2068f2d0-6afa-4df6-9d4b-37ea15900379" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.320871 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" podStartSLOduration=3.077593881 podStartE2EDuration="14.320848121s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.417943745 +0000 UTC m=+982.467105341" 
lastFinishedPulling="2026-01-31 05:37:48.661197985 +0000 UTC m=+993.710359581" observedRunningTime="2026-01-31 05:37:49.311344745 +0000 UTC m=+994.360506351" watchObservedRunningTime="2026-01-31 05:37:49.320848121 +0000 UTC m=+994.370009727" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.375262 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" podStartSLOduration=3.114936153 podStartE2EDuration="14.375239832s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.448589748 +0000 UTC m=+982.497751344" lastFinishedPulling="2026-01-31 05:37:48.708893417 +0000 UTC m=+993.758055023" observedRunningTime="2026-01-31 05:37:49.374007539 +0000 UTC m=+994.423169135" watchObservedRunningTime="2026-01-31 05:37:49.375239832 +0000 UTC m=+994.424401428" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.473592 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" podStartSLOduration=3.217628192 podStartE2EDuration="14.473572104s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.453542181 +0000 UTC m=+982.502703777" lastFinishedPulling="2026-01-31 05:37:48.709486093 +0000 UTC m=+993.758647689" observedRunningTime="2026-01-31 05:37:49.469035853 +0000 UTC m=+994.518197459" watchObservedRunningTime="2026-01-31 05:37:49.473572104 +0000 UTC m=+994.522733700" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.630102 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" podStartSLOduration=3.348024247 podStartE2EDuration="14.63008026s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.430203295 +0000 UTC m=+982.479364891" lastFinishedPulling="2026-01-31 
05:37:48.712259308 +0000 UTC m=+993.761420904" observedRunningTime="2026-01-31 05:37:49.626630368 +0000 UTC m=+994.675791974" watchObservedRunningTime="2026-01-31 05:37:49.63008026 +0000 UTC m=+994.679241856" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.633332 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" podStartSLOduration=2.660929964 podStartE2EDuration="14.633312747s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:36.711471931 +0000 UTC m=+981.760633527" lastFinishedPulling="2026-01-31 05:37:48.683854694 +0000 UTC m=+993.733016310" observedRunningTime="2026-01-31 05:37:49.535504109 +0000 UTC m=+994.584665715" watchObservedRunningTime="2026-01-31 05:37:49.633312747 +0000 UTC m=+994.682474343" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.716773 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" podStartSLOduration=3.439690879 podStartE2EDuration="14.716749299s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.46317672 +0000 UTC m=+982.512338316" lastFinishedPulling="2026-01-31 05:37:48.74023514 +0000 UTC m=+993.789396736" observedRunningTime="2026-01-31 05:37:49.715985428 +0000 UTC m=+994.765147034" watchObservedRunningTime="2026-01-31 05:37:49.716749299 +0000 UTC m=+994.765910895" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.930706 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" podStartSLOduration=3.095921584 podStartE2EDuration="14.930689648s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:36.86287021 +0000 UTC m=+981.912031806" lastFinishedPulling="2026-01-31 05:37:48.697638234 +0000 UTC 
m=+993.746799870" observedRunningTime="2026-01-31 05:37:49.815832382 +0000 UTC m=+994.864993978" watchObservedRunningTime="2026-01-31 05:37:49.930689648 +0000 UTC m=+994.979851244" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.931682 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" podStartSLOduration=3.663527104 podStartE2EDuration="14.931676494s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.461569407 +0000 UTC m=+982.510731003" lastFinishedPulling="2026-01-31 05:37:48.729718787 +0000 UTC m=+993.778880393" observedRunningTime="2026-01-31 05:37:49.926204798 +0000 UTC m=+994.975366394" watchObservedRunningTime="2026-01-31 05:37:49.931676494 +0000 UTC m=+994.980838090" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.967504 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" podStartSLOduration=3.492490829 podStartE2EDuration="14.967485207s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.194052889 +0000 UTC m=+982.243214475" lastFinishedPulling="2026-01-31 05:37:48.669047216 +0000 UTC m=+993.718208853" observedRunningTime="2026-01-31 05:37:49.962285717 +0000 UTC m=+995.011447313" watchObservedRunningTime="2026-01-31 05:37:49.967485207 +0000 UTC m=+995.016646803" Jan 31 05:37:49 crc kubenswrapper[5050]: I0131 05:37:49.993850 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" podStartSLOduration=2.844608201 podStartE2EDuration="14.993827725s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:36.511861988 +0000 UTC m=+981.561023574" lastFinishedPulling="2026-01-31 05:37:48.661081502 +0000 UTC m=+993.710243098" 
observedRunningTime="2026-01-31 05:37:49.992434297 +0000 UTC m=+995.041595883" watchObservedRunningTime="2026-01-31 05:37:49.993827725 +0000 UTC m=+995.042989331" Jan 31 05:37:50 crc kubenswrapper[5050]: I0131 05:37:50.115934 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" podStartSLOduration=3.884882383 podStartE2EDuration="15.115913325s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.430183074 +0000 UTC m=+982.479344670" lastFinishedPulling="2026-01-31 05:37:48.661214006 +0000 UTC m=+993.710375612" observedRunningTime="2026-01-31 05:37:50.077524984 +0000 UTC m=+995.126686590" watchObservedRunningTime="2026-01-31 05:37:50.115913325 +0000 UTC m=+995.165074931" Jan 31 05:37:50 crc kubenswrapper[5050]: I0131 05:37:50.324516 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" event={"ID":"6c99a6ca-0409-48ea-ab61-681b887f2f6f","Type":"ContainerStarted","Data":"839d1b51a412fd837790f9f3bdbf4cb21c51fda1264f8fc1b9095039512543eb"} Jan 31 05:37:50 crc kubenswrapper[5050]: I0131 05:37:50.327003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" event={"ID":"5a2adbe7-1023-4099-a956-864a1dc07459","Type":"ContainerStarted","Data":"a62d43e046cb0666e9a1c045c140d5021d0a5f477cdf467d5face1ff34401316"} Jan 31 05:37:50 crc kubenswrapper[5050]: I0131 05:37:50.328726 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" event={"ID":"3f77e259-db73-4420-9448-3d1239afe25f","Type":"ContainerStarted","Data":"d0052a6e980da46ca0c78fe9cce35c6b68e2664416cb467a87f4f08bf5cceacf"} Jan 31 05:37:50 crc kubenswrapper[5050]: I0131 05:37:50.329309 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" Jan 31 05:37:50 crc kubenswrapper[5050]: I0131 05:37:50.331621 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" event={"ID":"dae8abbe-3616-42f7-875f-454d03bda074","Type":"ContainerStarted","Data":"9fab13a10bad08dfff552c438badfabac9b03c5c76ab145b78f4783a009ca5c7"} Jan 31 05:37:50 crc kubenswrapper[5050]: I0131 05:37:50.351812 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" podStartSLOduration=4.064733287 podStartE2EDuration="15.351792844s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.423480054 +0000 UTC m=+982.472641650" lastFinishedPulling="2026-01-31 05:37:48.710539581 +0000 UTC m=+993.759701207" observedRunningTime="2026-01-31 05:37:50.343686176 +0000 UTC m=+995.392847772" watchObservedRunningTime="2026-01-31 05:37:50.351792844 +0000 UTC m=+995.400954440" Jan 31 05:37:51 crc kubenswrapper[5050]: I0131 05:37:51.404041 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:37:51 crc kubenswrapper[5050]: E0131 05:37:51.404219 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:51 crc kubenswrapper[5050]: E0131 05:37:51.404265 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert podName:702ce305-8b7b-445c-9d94-442b12074572 nodeName:}" failed. 
No retries permitted until 2026-01-31 05:38:07.404251014 +0000 UTC m=+1012.453412610 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert") pod "infra-operator-controller-manager-79955696d6-v96rv" (UID: "702ce305-8b7b-445c-9d94-442b12074572") : secret "infra-operator-webhook-server-cert" not found Jan 31 05:37:51 crc kubenswrapper[5050]: E0131 05:37:51.709353 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:51 crc kubenswrapper[5050]: E0131 05:37:51.709416 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert podName:1cb7e321-484b-42e4-a276-0d27a7c5fc95 nodeName:}" failed. No retries permitted until 2026-01-31 05:38:07.709402344 +0000 UTC m=+1012.758563940 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" (UID: "1cb7e321-484b-42e4-a276-0d27a7c5fc95") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 05:37:51 crc kubenswrapper[5050]: I0131 05:37:51.709213 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:37:52 crc kubenswrapper[5050]: I0131 05:37:52.116591 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:52 crc kubenswrapper[5050]: I0131 05:37:52.116663 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:37:52 crc kubenswrapper[5050]: E0131 05:37:52.116765 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 05:37:52 crc kubenswrapper[5050]: E0131 05:37:52.116836 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:38:08.116820561 +0000 UTC m=+1013.165982157 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "metrics-server-cert" not found Jan 31 05:37:52 crc kubenswrapper[5050]: E0131 05:37:52.116927 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 05:37:52 crc kubenswrapper[5050]: E0131 05:37:52.117038 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs podName:a5c20cf0-d535-4809-a555-7f439ebcc243 nodeName:}" failed. No retries permitted until 2026-01-31 05:38:08.117017907 +0000 UTC m=+1013.166179563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs") pod "openstack-operator-controller-manager-6c7cc9dd76-c9qds" (UID: "a5c20cf0-d535-4809-a555-7f439ebcc243") : secret "webhook-server-cert" not found Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.751390 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-k54v4" Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.786870 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-sz8cj" Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.793529 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-vfcdz" Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.800831 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-fvrm9" Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.830327 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-tts92" Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.832498 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qh226" Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.866478 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8gp" Jan 31 05:37:55 crc kubenswrapper[5050]: I0131 05:37:55.934018 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w5w6h" Jan 31 05:37:56 crc kubenswrapper[5050]: I0131 05:37:56.091706 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-wb8ls" Jan 31 05:37:56 crc kubenswrapper[5050]: I0131 05:37:56.162403 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-52jtl" Jan 31 05:37:56 crc kubenswrapper[5050]: I0131 05:37:56.176961 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-768m8" Jan 31 05:37:56 crc kubenswrapper[5050]: I0131 05:37:56.189527 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-57zph" Jan 31 05:37:56 crc kubenswrapper[5050]: I0131 05:37:56.232362 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9kr84" Jan 31 05:37:56 crc kubenswrapper[5050]: I0131 05:37:56.460263 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fcsch" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.397588 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" event={"ID":"a56fce84-913d-42bd-9afe-8831d997c58f","Type":"ContainerStarted","Data":"7c51c75f14679817d4b4b8994ddfb0c71bd35bf2dbc7655a5cab19fb58f872fe"} Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.398341 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.399214 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" event={"ID":"afb534c6-c882-4e20-b9d3-c4e732f60471","Type":"ContainerStarted","Data":"d518f0b9671771dce5140364ee2949045211d1b6d15c6c093f0b60d5a5fc2eb2"} Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.400434 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" event={"ID":"e90fb46f-4a14-4b3a-a330-418fce2fec93","Type":"ContainerStarted","Data":"4c3b51a1917bf49364b2ba5861aa2199455af428d8dff1f296a77c4e2de03f5c"} Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.400935 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.402469 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" event={"ID":"a072243c-9f79-4f43-86c1-7a0275aadc2d","Type":"ContainerStarted","Data":"320f192ca2896e7696081c0ff7987765e8d7d6877968d968e1841cf9ea9d1889"} Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.403025 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.404387 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" event={"ID":"e02cf864-b078-4d57-b75a-0f6637da6869","Type":"ContainerStarted","Data":"83c3a9dbf4305a1cf45476e9318f3c9b48aaa85b284ec31c231142633eb8cbb2"} Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.404733 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" Jan 31 05:37:59 crc kubenswrapper[5050]: 
I0131 05:37:59.424055 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" podStartSLOduration=3.373988815 podStartE2EDuration="24.424037004s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.475115781 +0000 UTC m=+982.524277377" lastFinishedPulling="2026-01-31 05:37:58.52516397 +0000 UTC m=+1003.574325566" observedRunningTime="2026-01-31 05:37:59.421933648 +0000 UTC m=+1004.471095254" watchObservedRunningTime="2026-01-31 05:37:59.424037004 +0000 UTC m=+1004.473198620" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.448462 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" podStartSLOduration=3.480758094 podStartE2EDuration="24.448438839s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.475164513 +0000 UTC m=+982.524326109" lastFinishedPulling="2026-01-31 05:37:58.442845258 +0000 UTC m=+1003.492006854" observedRunningTime="2026-01-31 05:37:59.442796898 +0000 UTC m=+1004.491958524" watchObservedRunningTime="2026-01-31 05:37:59.448438839 +0000 UTC m=+1004.497600435" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.485865 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5sbj8" podStartSLOduration=2.460119249 podStartE2EDuration="23.485842415s" podCreationTimestamp="2026-01-31 05:37:36 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.480536927 +0000 UTC m=+982.529698523" lastFinishedPulling="2026-01-31 05:37:58.506260093 +0000 UTC m=+1003.555421689" observedRunningTime="2026-01-31 05:37:59.485451554 +0000 UTC m=+1004.534613160" watchObservedRunningTime="2026-01-31 05:37:59.485842415 +0000 UTC m=+1004.535004011" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.512009 5050 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" podStartSLOduration=3.487622788 podStartE2EDuration="24.511986567s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.47505798 +0000 UTC m=+982.524219566" lastFinishedPulling="2026-01-31 05:37:58.499421749 +0000 UTC m=+1003.548583345" observedRunningTime="2026-01-31 05:37:59.511009231 +0000 UTC m=+1004.560170827" watchObservedRunningTime="2026-01-31 05:37:59.511986567 +0000 UTC m=+1004.561148173" Jan 31 05:37:59 crc kubenswrapper[5050]: I0131 05:37:59.543812 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" podStartSLOduration=3.495242613 podStartE2EDuration="24.543790822s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.475303366 +0000 UTC m=+982.524464962" lastFinishedPulling="2026-01-31 05:37:58.523851575 +0000 UTC m=+1003.573013171" observedRunningTime="2026-01-31 05:37:59.537189745 +0000 UTC m=+1004.586351341" watchObservedRunningTime="2026-01-31 05:37:59.543790822 +0000 UTC m=+1004.592952418" Jan 31 05:38:00 crc kubenswrapper[5050]: I0131 05:38:00.411981 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" event={"ID":"2068f2d0-6afa-4df6-9d4b-37ea15900379","Type":"ContainerStarted","Data":"7d861dca9ac89bea07d1e0610851a5266ffe8d79efae6c7fdb46c2de5cf18740"} Jan 31 05:38:00 crc kubenswrapper[5050]: I0131 05:38:00.412579 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" Jan 31 05:38:00 crc kubenswrapper[5050]: I0131 05:38:00.430364 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" podStartSLOduration=3.061061017 podStartE2EDuration="25.430348745s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:37:37.430529163 +0000 UTC m=+982.479690759" lastFinishedPulling="2026-01-31 05:37:59.799816891 +0000 UTC m=+1004.848978487" observedRunningTime="2026-01-31 05:38:00.427620911 +0000 UTC m=+1005.476782507" watchObservedRunningTime="2026-01-31 05:38:00.430348745 +0000 UTC m=+1005.479510331" Jan 31 05:38:05 crc kubenswrapper[5050]: I0131 05:38:05.993229 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5vs7" Jan 31 05:38:06 crc kubenswrapper[5050]: I0131 05:38:06.116229 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-669699fbb-92tbj" Jan 31 05:38:06 crc kubenswrapper[5050]: I0131 05:38:06.127700 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7sw6k" Jan 31 05:38:06 crc kubenswrapper[5050]: I0131 05:38:06.163077 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dg6bt" Jan 31 05:38:06 crc kubenswrapper[5050]: I0131 05:38:06.556383 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-v62tj" Jan 31 05:38:07 crc kubenswrapper[5050]: I0131 05:38:07.488246 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:38:07 crc kubenswrapper[5050]: I0131 05:38:07.496887 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/702ce305-8b7b-445c-9d94-442b12074572-cert\") pod \"infra-operator-controller-manager-79955696d6-v96rv\" (UID: \"702ce305-8b7b-445c-9d94-442b12074572\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:38:07 crc kubenswrapper[5050]: I0131 05:38:07.691112 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:38:07 crc kubenswrapper[5050]: I0131 05:38:07.793355 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:38:07 crc kubenswrapper[5050]: I0131 05:38:07.805626 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cb7e321-484b-42e4-a276-0d27a7c5fc95-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb\" (UID: \"1cb7e321-484b-42e4-a276-0d27a7c5fc95\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:38:07 crc kubenswrapper[5050]: I0131 05:38:07.869010 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.200714 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-v96rv"] Jan 31 05:38:08 crc kubenswrapper[5050]: W0131 05:38:08.205975 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod702ce305_8b7b_445c_9d94_442b12074572.slice/crio-9bd85b047b3d06e12490501aa7671b2a011a922550dd33496e361c365379b950 WatchSource:0}: Error finding container 9bd85b047b3d06e12490501aa7671b2a011a922550dd33496e361c365379b950: Status 404 returned error can't find the container with id 9bd85b047b3d06e12490501aa7671b2a011a922550dd33496e361c365379b950 Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.214813 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.214871 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.220864 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-webhook-certs\") pod 
\"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.221765 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5c20cf0-d535-4809-a555-7f439ebcc243-metrics-certs\") pod \"openstack-operator-controller-manager-6c7cc9dd76-c9qds\" (UID: \"a5c20cf0-d535-4809-a555-7f439ebcc243\") " pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.333836 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb"] Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.433731 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.467019 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" event={"ID":"702ce305-8b7b-445c-9d94-442b12074572","Type":"ContainerStarted","Data":"9bd85b047b3d06e12490501aa7671b2a011a922550dd33496e361c365379b950"} Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.468355 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" event={"ID":"1cb7e321-484b-42e4-a276-0d27a7c5fc95","Type":"ContainerStarted","Data":"6f222dbffe9867b3e822b8d16969404579ec92fa7334abad07870679df0023c4"} Jan 31 05:38:08 crc kubenswrapper[5050]: I0131 05:38:08.909397 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds"] Jan 31 05:38:09 crc 
kubenswrapper[5050]: I0131 05:38:09.017678 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.017733 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.017784 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.018370 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28ca310875e65cf5e9290eaf5b0d71245b16dc8b0b1ac33324bea4c715946d1f"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.018434 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://28ca310875e65cf5e9290eaf5b0d71245b16dc8b0b1ac33324bea4c715946d1f" gracePeriod=600 Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.477758 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" 
containerID="28ca310875e65cf5e9290eaf5b0d71245b16dc8b0b1ac33324bea4c715946d1f" exitCode=0 Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.477995 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"28ca310875e65cf5e9290eaf5b0d71245b16dc8b0b1ac33324bea4c715946d1f"} Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.478236 5050 scope.go:117] "RemoveContainer" containerID="8fda1476157f97a2d389aaeaa03f696c709d711388e30f77ab369ecc733af733" Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.479244 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" event={"ID":"a5c20cf0-d535-4809-a555-7f439ebcc243","Type":"ContainerStarted","Data":"91b8b001f6eb1002ad9c6f84c57a1ba9167400888d57e5f2e33f5fec9159ff6d"} Jan 31 05:38:09 crc kubenswrapper[5050]: I0131 05:38:09.479287 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" event={"ID":"a5c20cf0-d535-4809-a555-7f439ebcc243","Type":"ContainerStarted","Data":"94f546bc83fd15e99b456694730d5ebcae1db345f92f6c99c35c842a65bd3059"} Jan 31 05:38:10 crc kubenswrapper[5050]: I0131 05:38:10.489715 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:38:10 crc kubenswrapper[5050]: I0131 05:38:10.516203 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" podStartSLOduration=34.516177791 podStartE2EDuration="34.516177791s" podCreationTimestamp="2026-01-31 05:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
05:38:10.512249995 +0000 UTC m=+1015.561411581" watchObservedRunningTime="2026-01-31 05:38:10.516177791 +0000 UTC m=+1015.565339407" Jan 31 05:38:11 crc kubenswrapper[5050]: I0131 05:38:11.500084 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"37867fe0b3a3a54da7bcbf64f0d3572ca6af3a27ac44fef3f2c635dee432f98f"} Jan 31 05:38:13 crc kubenswrapper[5050]: I0131 05:38:13.523675 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" event={"ID":"1cb7e321-484b-42e4-a276-0d27a7c5fc95","Type":"ContainerStarted","Data":"09b75569bf86b17aa1710e6397d6f17e9472b2a3e91f1d7c292f60de7fe15d55"} Jan 31 05:38:13 crc kubenswrapper[5050]: I0131 05:38:13.524170 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:38:13 crc kubenswrapper[5050]: I0131 05:38:13.525548 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" event={"ID":"702ce305-8b7b-445c-9d94-442b12074572","Type":"ContainerStarted","Data":"17363f830e69845d6eccc47a337a71bc8dd326d39ffd8955fe0b6da82f1ff53b"} Jan 31 05:38:13 crc kubenswrapper[5050]: I0131 05:38:13.525712 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:38:13 crc kubenswrapper[5050]: I0131 05:38:13.567788 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" podStartSLOduration=33.723552783 podStartE2EDuration="38.567751701s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 
05:38:08.331211699 +0000 UTC m=+1013.380373325" lastFinishedPulling="2026-01-31 05:38:13.175410607 +0000 UTC m=+1018.224572243" observedRunningTime="2026-01-31 05:38:13.563204518 +0000 UTC m=+1018.612366174" watchObservedRunningTime="2026-01-31 05:38:13.567751701 +0000 UTC m=+1018.616913337" Jan 31 05:38:13 crc kubenswrapper[5050]: I0131 05:38:13.614468 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" podStartSLOduration=33.661387992 podStartE2EDuration="38.614444415s" podCreationTimestamp="2026-01-31 05:37:35 +0000 UTC" firstStartedPulling="2026-01-31 05:38:08.212572931 +0000 UTC m=+1013.261734527" lastFinishedPulling="2026-01-31 05:38:13.165629334 +0000 UTC m=+1018.214790950" observedRunningTime="2026-01-31 05:38:13.596544704 +0000 UTC m=+1018.645706340" watchObservedRunningTime="2026-01-31 05:38:13.614444415 +0000 UTC m=+1018.663606041" Jan 31 05:38:18 crc kubenswrapper[5050]: I0131 05:38:18.443108 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6c7cc9dd76-c9qds" Jan 31 05:38:27 crc kubenswrapper[5050]: I0131 05:38:27.698234 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-v96rv" Jan 31 05:38:27 crc kubenswrapper[5050]: I0131 05:38:27.877330 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.282137 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hmfgf"] Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.284061 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.286093 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nwtbn" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.286392 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.286549 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.286752 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.292552 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hmfgf"] Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.330999 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ctcn8"] Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.332027 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.334873 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.341458 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ctcn8"] Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.403721 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjtb\" (UniqueName: \"kubernetes.io/projected/14292d47-83cd-4ef4-a097-4e1763e6b97b-kube-api-access-cmjtb\") pod \"dnsmasq-dns-675f4bcbfc-hmfgf\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.403821 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2db9k\" (UniqueName: \"kubernetes.io/projected/97716da0-2bbc-4c60-ac61-c27c355a6f2f-kube-api-access-2db9k\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.403850 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14292d47-83cd-4ef4-a097-4e1763e6b97b-config\") pod \"dnsmasq-dns-675f4bcbfc-hmfgf\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.403882 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.404026 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-config\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.505117 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.505498 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-config\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.505625 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjtb\" (UniqueName: \"kubernetes.io/projected/14292d47-83cd-4ef4-a097-4e1763e6b97b-kube-api-access-cmjtb\") pod \"dnsmasq-dns-675f4bcbfc-hmfgf\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.505771 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2db9k\" (UniqueName: \"kubernetes.io/projected/97716da0-2bbc-4c60-ac61-c27c355a6f2f-kube-api-access-2db9k\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 
crc kubenswrapper[5050]: I0131 05:38:44.506208 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14292d47-83cd-4ef4-a097-4e1763e6b97b-config\") pod \"dnsmasq-dns-675f4bcbfc-hmfgf\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.506690 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.506816 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-config\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.507031 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14292d47-83cd-4ef4-a097-4e1763e6b97b-config\") pod \"dnsmasq-dns-675f4bcbfc-hmfgf\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.530721 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2db9k\" (UniqueName: \"kubernetes.io/projected/97716da0-2bbc-4c60-ac61-c27c355a6f2f-kube-api-access-2db9k\") pod \"dnsmasq-dns-78dd6ddcc-ctcn8\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.530923 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cmjtb\" (UniqueName: \"kubernetes.io/projected/14292d47-83cd-4ef4-a097-4e1763e6b97b-kube-api-access-cmjtb\") pod \"dnsmasq-dns-675f4bcbfc-hmfgf\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.609612 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.647746 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.871111 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hmfgf"] Jan 31 05:38:44 crc kubenswrapper[5050]: I0131 05:38:44.879756 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:38:45 crc kubenswrapper[5050]: I0131 05:38:45.208145 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ctcn8"] Jan 31 05:38:45 crc kubenswrapper[5050]: W0131 05:38:45.209132 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97716da0_2bbc_4c60_ac61_c27c355a6f2f.slice/crio-ef99860f3eacf4ebce6865dd27b01e6cb95b147d986ac3f824b897f3cbe50ef7 WatchSource:0}: Error finding container ef99860f3eacf4ebce6865dd27b01e6cb95b147d986ac3f824b897f3cbe50ef7: Status 404 returned error can't find the container with id ef99860f3eacf4ebce6865dd27b01e6cb95b147d986ac3f824b897f3cbe50ef7 Jan 31 05:38:45 crc kubenswrapper[5050]: I0131 05:38:45.801815 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" event={"ID":"14292d47-83cd-4ef4-a097-4e1763e6b97b","Type":"ContainerStarted","Data":"b3c895f6bef6d6e7d20b6948b3910b1607e5ae7589edf5acc91e81c1edad063f"} Jan 31 05:38:45 crc 
kubenswrapper[5050]: I0131 05:38:45.805047 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" event={"ID":"97716da0-2bbc-4c60-ac61-c27c355a6f2f","Type":"ContainerStarted","Data":"ef99860f3eacf4ebce6865dd27b01e6cb95b147d986ac3f824b897f3cbe50ef7"} Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.150316 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hmfgf"] Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.178820 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6d9q4"] Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.183366 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.200927 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6d9q4"] Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.251220 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-config\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.251294 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.251320 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9bbn\" (UniqueName: 
\"kubernetes.io/projected/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-kube-api-access-h9bbn\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.352460 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.352513 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9bbn\" (UniqueName: \"kubernetes.io/projected/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-kube-api-access-h9bbn\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.352569 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-config\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.353409 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-config\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.354232 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-dns-svc\") pod 
\"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.386170 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ctcn8"] Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.389205 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9bbn\" (UniqueName: \"kubernetes.io/projected/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-kube-api-access-h9bbn\") pod \"dnsmasq-dns-666b6646f7-6d9q4\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") " pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.423974 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-887mr"] Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.424939 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.442017 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-887mr"] Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.514855 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.559478 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.559860 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g94ks\" (UniqueName: \"kubernetes.io/projected/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-kube-api-access-g94ks\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.559884 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-config\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.662072 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g94ks\" (UniqueName: \"kubernetes.io/projected/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-kube-api-access-g94ks\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.662122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-config\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: 
\"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.662196 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.663233 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-config\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.663292 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.698427 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g94ks\" (UniqueName: \"kubernetes.io/projected/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-kube-api-access-g94ks\") pod \"dnsmasq-dns-57d769cc4f-887mr\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.752222 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:38:47 crc kubenswrapper[5050]: I0131 05:38:47.965591 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6d9q4"] Jan 31 05:38:47 crc kubenswrapper[5050]: W0131 05:38:47.969482 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34761c5e_f79a_4fe3_93a9_6a1084dc3a0f.slice/crio-1a6e16685b1f1af2264bdb2cddcc0d44690bbfb5458d84e08d82e933963d6744 WatchSource:0}: Error finding container 1a6e16685b1f1af2264bdb2cddcc0d44690bbfb5458d84e08d82e933963d6744: Status 404 returned error can't find the container with id 1a6e16685b1f1af2264bdb2cddcc0d44690bbfb5458d84e08d82e933963d6744 Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.206428 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-887mr"] Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.292252 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.293494 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.297756 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.297782 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.299411 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.299737 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ddl55" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.299769 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.299886 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.300021 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.309222 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374015 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374498 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374515 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8ng\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-kube-api-access-hk8ng\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374573 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374593 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374620 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374642 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374673 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.374693 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476101 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476158 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk8ng\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-kube-api-access-hk8ng\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476185 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476205 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476229 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476261 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476287 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476323 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476345 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476383 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.476420 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.477278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.477368 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.477875 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.478392 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.478596 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.480445 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.481531 
5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.481634 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.486622 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.488629 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.492630 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk8ng\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-kube-api-access-hk8ng\") pod \"rabbitmq-server-0\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.511792 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: 
\"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.567057 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.568263 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.579750 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.580195 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.580242 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.580462 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.580569 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.581110 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.581197 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.581464 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xnht7" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.621010 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684782 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684803 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684819 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684841 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x67d\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-kube-api-access-6x67d\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684870 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec33cd-ecd1-4244-abb0-c5a27441abd2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684895 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684919 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684933 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec33cd-ecd1-4244-abb0-c5a27441abd2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.684993 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786592 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786639 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786654 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786686 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x67d\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-kube-api-access-6x67d\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 
31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786716 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec33cd-ecd1-4244-abb0-c5a27441abd2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786745 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786766 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786788 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec33cd-ecd1-4244-abb0-c5a27441abd2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786817 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786836 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.786858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.787086 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.789371 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.791129 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.791521 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.791672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.792525 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.793828 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec33cd-ecd1-4244-abb0-c5a27441abd2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.794534 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.795352 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec33cd-ecd1-4244-abb0-c5a27441abd2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.797984 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.806822 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.813972 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x67d\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-kube-api-access-6x67d\") pod \"rabbitmq-cell1-server-0\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.852711 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" event={"ID":"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f","Type":"ContainerStarted","Data":"1a6e16685b1f1af2264bdb2cddcc0d44690bbfb5458d84e08d82e933963d6744"} Jan 31 05:38:48 crc kubenswrapper[5050]: I0131 05:38:48.907392 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.777185 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.779048 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.780638 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.785614 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2cn69" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.785752 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.786192 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.790326 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.800248 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906106 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906162 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906188 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-w48ln\" (UniqueName: \"kubernetes.io/projected/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-kube-api-access-w48ln\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906215 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906244 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906263 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906335 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:49 crc kubenswrapper[5050]: I0131 05:38:49.906369 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007223 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007274 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007362 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007397 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007420 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w48ln\" (UniqueName: \"kubernetes.io/projected/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-kube-api-access-w48ln\") pod \"openstack-galera-0\" (UID: 
\"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007446 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007476 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.007497 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.010205 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.016504 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc 
kubenswrapper[5050]: I0131 05:38:50.016719 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.017193 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.021725 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.022028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.027727 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.035245 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48ln\" (UniqueName: 
\"kubernetes.io/projected/6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1-kube-api-access-w48ln\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.073476 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1\") " pod="openstack/openstack-galera-0" Jan 31 05:38:50 crc kubenswrapper[5050]: I0131 05:38:50.108419 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.128706 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.130990 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.135189 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.135352 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.135464 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.135602 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hv8fb" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.141568 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.226308 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d6595e6-419a-4ade-8070-99a41d9c8204-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.226360 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6595e6-419a-4ade-8070-99a41d9c8204-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.226395 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.226430 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6595e6-419a-4ade-8070-99a41d9c8204-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.226458 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: 
I0131 05:38:51.226629 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbxx\" (UniqueName: \"kubernetes.io/projected/9d6595e6-419a-4ade-8070-99a41d9c8204-kube-api-access-zzbxx\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.226759 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.226812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328675 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6595e6-419a-4ade-8070-99a41d9c8204-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328730 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328770 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6595e6-419a-4ade-8070-99a41d9c8204-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328801 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328835 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbxx\" (UniqueName: \"kubernetes.io/projected/9d6595e6-419a-4ade-8070-99a41d9c8204-kube-api-access-zzbxx\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328861 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328878 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.328901 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d6595e6-419a-4ade-8070-99a41d9c8204-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.329285 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d6595e6-419a-4ade-8070-99a41d9c8204-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.329420 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.330284 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.330706 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.331365 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/9d6595e6-419a-4ade-8070-99a41d9c8204-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.333506 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6595e6-419a-4ade-8070-99a41d9c8204-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.348664 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6595e6-419a-4ade-8070-99a41d9c8204-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.350646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.359889 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbxx\" (UniqueName: \"kubernetes.io/projected/9d6595e6-419a-4ade-8070-99a41d9c8204-kube-api-access-zzbxx\") pod \"openstack-cell1-galera-0\" (UID: \"9d6595e6-419a-4ade-8070-99a41d9c8204\") " pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.398250 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.399124 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.407354 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-pwg65" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.407574 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.407744 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.425809 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.430577 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlbqq\" (UniqueName: \"kubernetes.io/projected/92f101f3-10e7-4e7f-a980-ce6a40e6e042-kube-api-access-vlbqq\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.430646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92f101f3-10e7-4e7f-a980-ce6a40e6e042-kolla-config\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.430666 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92f101f3-10e7-4e7f-a980-ce6a40e6e042-config-data\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.430684 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f101f3-10e7-4e7f-a980-ce6a40e6e042-combined-ca-bundle\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.430739 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f101f3-10e7-4e7f-a980-ce6a40e6e042-memcached-tls-certs\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.452939 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.532254 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f101f3-10e7-4e7f-a980-ce6a40e6e042-memcached-tls-certs\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.532594 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlbqq\" (UniqueName: \"kubernetes.io/projected/92f101f3-10e7-4e7f-a980-ce6a40e6e042-kube-api-access-vlbqq\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.532642 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92f101f3-10e7-4e7f-a980-ce6a40e6e042-kolla-config\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.532663 5050 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92f101f3-10e7-4e7f-a980-ce6a40e6e042-config-data\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.532679 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f101f3-10e7-4e7f-a980-ce6a40e6e042-combined-ca-bundle\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.533417 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92f101f3-10e7-4e7f-a980-ce6a40e6e042-kolla-config\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.533574 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92f101f3-10e7-4e7f-a980-ce6a40e6e042-config-data\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.536246 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f101f3-10e7-4e7f-a980-ce6a40e6e042-memcached-tls-certs\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.549632 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f101f3-10e7-4e7f-a980-ce6a40e6e042-combined-ca-bundle\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc 
kubenswrapper[5050]: I0131 05:38:51.560486 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlbqq\" (UniqueName: \"kubernetes.io/projected/92f101f3-10e7-4e7f-a980-ce6a40e6e042-kube-api-access-vlbqq\") pod \"memcached-0\" (UID: \"92f101f3-10e7-4e7f-a980-ce6a40e6e042\") " pod="openstack/memcached-0" Jan 31 05:38:51 crc kubenswrapper[5050]: I0131 05:38:51.719818 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 05:38:52 crc kubenswrapper[5050]: W0131 05:38:52.685236 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ea42cc2_c35c_46b1_adb3_9cc699bbd9b4.slice/crio-fbbe1c75585047ffdaaaefe96c9e8dd91dddb3ca26b42138e7ea3af238174c73 WatchSource:0}: Error finding container fbbe1c75585047ffdaaaefe96c9e8dd91dddb3ca26b42138e7ea3af238174c73: Status 404 returned error can't find the container with id fbbe1c75585047ffdaaaefe96c9e8dd91dddb3ca26b42138e7ea3af238174c73 Jan 31 05:38:52 crc kubenswrapper[5050]: I0131 05:38:52.898494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" event={"ID":"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4","Type":"ContainerStarted","Data":"fbbe1c75585047ffdaaaefe96c9e8dd91dddb3ca26b42138e7ea3af238174c73"} Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.273975 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.275115 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.277538 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-q87c2" Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.284881 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.358671 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnxsz\" (UniqueName: \"kubernetes.io/projected/0afb5f3d-b148-46fd-9867-071aafa5adff-kube-api-access-wnxsz\") pod \"kube-state-metrics-0\" (UID: \"0afb5f3d-b148-46fd-9867-071aafa5adff\") " pod="openstack/kube-state-metrics-0" Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.460115 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnxsz\" (UniqueName: \"kubernetes.io/projected/0afb5f3d-b148-46fd-9867-071aafa5adff-kube-api-access-wnxsz\") pod \"kube-state-metrics-0\" (UID: \"0afb5f3d-b148-46fd-9867-071aafa5adff\") " pod="openstack/kube-state-metrics-0" Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.483973 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnxsz\" (UniqueName: \"kubernetes.io/projected/0afb5f3d-b148-46fd-9867-071aafa5adff-kube-api-access-wnxsz\") pod \"kube-state-metrics-0\" (UID: \"0afb5f3d-b148-46fd-9867-071aafa5adff\") " pod="openstack/kube-state-metrics-0" Jan 31 05:38:53 crc kubenswrapper[5050]: I0131 05:38:53.592152 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.345608 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-grlfx"] Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.346815 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.348299 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-chfld" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.353820 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.354785 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.362536 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-p2rnn"] Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.366299 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.396946 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-p2rnn"] Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.415848 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grlfx"] Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.424760 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw98w\" (UniqueName: \"kubernetes.io/projected/23898a5e-f7c6-473b-a882-c91ed8ff2e06-kube-api-access-cw98w\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.424982 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23898a5e-f7c6-473b-a882-c91ed8ff2e06-scripts\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425025 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eca93ff-3985-4e89-9254-a5d2a94793d6-scripts\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425119 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eca93ff-3985-4e89-9254-a5d2a94793d6-combined-ca-bundle\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: 
I0131 05:38:57.425154 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eca93ff-3985-4e89-9254-a5d2a94793d6-ovn-controller-tls-certs\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425246 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-run\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425283 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-run-ovn\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425380 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-lib\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425398 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crbg\" (UniqueName: \"kubernetes.io/projected/5eca93ff-3985-4e89-9254-a5d2a94793d6-kube-api-access-7crbg\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425450 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-log-ovn\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.425505 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-run\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.426011 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-etc-ovs\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.426062 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-log\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527197 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eca93ff-3985-4e89-9254-a5d2a94793d6-combined-ca-bundle\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527241 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eca93ff-3985-4e89-9254-a5d2a94793d6-ovn-controller-tls-certs\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527267 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-run-ovn\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527283 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-run\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527336 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-lib\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527355 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7crbg\" (UniqueName: \"kubernetes.io/projected/5eca93ff-3985-4e89-9254-a5d2a94793d6-kube-api-access-7crbg\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527374 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-log-ovn\") pod 
\"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527400 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-run\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527441 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-etc-ovs\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527462 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-log\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527483 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw98w\" (UniqueName: \"kubernetes.io/projected/23898a5e-f7c6-473b-a882-c91ed8ff2e06-kube-api-access-cw98w\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.527515 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23898a5e-f7c6-473b-a882-c91ed8ff2e06-scripts\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc 
kubenswrapper[5050]: I0131 05:38:57.527537 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eca93ff-3985-4e89-9254-a5d2a94793d6-scripts\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.528112 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-run-ovn\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.529471 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-run\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.529858 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-run\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.529993 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-log\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.529991 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-var-lib\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.530081 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5eca93ff-3985-4e89-9254-a5d2a94793d6-var-log-ovn\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.530291 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/23898a5e-f7c6-473b-a882-c91ed8ff2e06-etc-ovs\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.533029 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23898a5e-f7c6-473b-a882-c91ed8ff2e06-scripts\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.540915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eca93ff-3985-4e89-9254-a5d2a94793d6-scripts\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.541339 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eca93ff-3985-4e89-9254-a5d2a94793d6-combined-ca-bundle\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 
31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.542724 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eca93ff-3985-4e89-9254-a5d2a94793d6-ovn-controller-tls-certs\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.548177 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw98w\" (UniqueName: \"kubernetes.io/projected/23898a5e-f7c6-473b-a882-c91ed8ff2e06-kube-api-access-cw98w\") pod \"ovn-controller-ovs-p2rnn\" (UID: \"23898a5e-f7c6-473b-a882-c91ed8ff2e06\") " pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.550378 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7crbg\" (UniqueName: \"kubernetes.io/projected/5eca93ff-3985-4e89-9254-a5d2a94793d6-kube-api-access-7crbg\") pod \"ovn-controller-grlfx\" (UID: \"5eca93ff-3985-4e89-9254-a5d2a94793d6\") " pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.664369 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grlfx" Jan 31 05:38:57 crc kubenswrapper[5050]: I0131 05:38:57.691664 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.323328 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.331776 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.334575 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.335006 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.335184 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.335254 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8rnq7" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.335557 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.336255 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.440760 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.440824 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-config\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.440863 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-wxtd8\" (UniqueName: \"kubernetes.io/projected/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-kube-api-access-wxtd8\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.440892 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.440918 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.440989 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.441015 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.441087 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.542818 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.542902 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.542929 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-config\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.542982 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxtd8\" (UniqueName: \"kubernetes.io/projected/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-kube-api-access-wxtd8\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.543006 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " 
pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.543029 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.543075 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.543102 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.543666 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.544803 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.551690 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-config\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.551886 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.552512 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.553668 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.567185 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.574015 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxtd8\" (UniqueName: \"kubernetes.io/projected/44932166-fbc5-41a4-bdf6-a3931dcbe9f0-kube-api-access-wxtd8\") pod \"ovsdbserver-nb-0\" (UID: 
\"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.579869 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44932166-fbc5-41a4-bdf6-a3931dcbe9f0\") " pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:58 crc kubenswrapper[5050]: I0131 05:38:58.655388 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.712921 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.716319 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.723305 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.725863 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.725978 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.726150 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-rffw8" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.726248 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.761553 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.761713 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.761809 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.761875 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.762104 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.762175 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djc4s\" (UniqueName: \"kubernetes.io/projected/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-kube-api-access-djc4s\") pod 
\"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.762227 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.762275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.865710 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.866701 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.866861 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 
31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.866994 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.867028 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.867611 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.867703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djc4s\" (UniqueName: \"kubernetes.io/projected/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-kube-api-access-djc4s\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.867983 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.868823 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.868895 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.869291 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.871087 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.872333 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.872635 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 
05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.872979 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.894754 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djc4s\" (UniqueName: \"kubernetes.io/projected/0c3ec6f4-fbc1-40cd-bbcc-a3910770af49-kube-api-access-djc4s\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:38:59 crc kubenswrapper[5050]: I0131 05:38:59.899646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49\") " pod="openstack/ovsdbserver-sb-0" Jan 31 05:39:00 crc kubenswrapper[5050]: I0131 05:39:00.053906 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 05:39:01 crc kubenswrapper[5050]: E0131 05:39:01.284975 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 05:39:01 crc kubenswrapper[5050]: E0131 05:39:01.285749 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmjtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hmfgf_openstack(14292d47-83cd-4ef4-a097-4e1763e6b97b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:39:01 crc kubenswrapper[5050]: E0131 05:39:01.287467 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" podUID="14292d47-83cd-4ef4-a097-4e1763e6b97b" Jan 31 05:39:01 crc kubenswrapper[5050]: E0131 05:39:01.348026 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 05:39:01 crc kubenswrapper[5050]: E0131 05:39:01.348258 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2db9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ctcn8_openstack(97716da0-2bbc-4c60-ac61-c27c355a6f2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:39:01 crc kubenswrapper[5050]: E0131 05:39:01.350108 5050 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" podUID="97716da0-2bbc-4c60-ac61-c27c355a6f2f" Jan 31 05:39:01 crc kubenswrapper[5050]: I0131 05:39:01.812449 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 05:39:01 crc kubenswrapper[5050]: W0131 05:39:01.856333 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d6595e6_419a_4ade_8070_99a41d9c8204.slice/crio-01f73c8692181f852e58180ee450b392b0c1e3807a207b91351f3b6d150e4933 WatchSource:0}: Error finding container 01f73c8692181f852e58180ee450b392b0c1e3807a207b91351f3b6d150e4933: Status 404 returned error can't find the container with id 01f73c8692181f852e58180ee450b392b0c1e3807a207b91351f3b6d150e4933 Jan 31 05:39:01 crc kubenswrapper[5050]: I0131 05:39:01.967324 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 05:39:01 crc kubenswrapper[5050]: I0131 05:39:01.972820 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:39:01 crc kubenswrapper[5050]: W0131 05:39:01.978476 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaec33cd_ecd1_4244_abb0_c5a27441abd2.slice/crio-c737df552da7e8b2db293133ae47c07759c7160574545979c12843ffbdef1eb2 WatchSource:0}: Error finding container c737df552da7e8b2db293133ae47c07759c7160574545979c12843ffbdef1eb2: Status 404 returned error can't find the container with id c737df552da7e8b2db293133ae47c07759c7160574545979c12843ffbdef1eb2 Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.010611 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"faec33cd-ecd1-4244-abb0-c5a27441abd2","Type":"ContainerStarted","Data":"c737df552da7e8b2db293133ae47c07759c7160574545979c12843ffbdef1eb2"} Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.011798 5050 generic.go:334] "Generic (PLEG): container finished" podID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerID="01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1" exitCode=0 Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.011866 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" event={"ID":"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f","Type":"ContainerDied","Data":"01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1"} Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.012634 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d6595e6-419a-4ade-8070-99a41d9c8204","Type":"ContainerStarted","Data":"01f73c8692181f852e58180ee450b392b0c1e3807a207b91351f3b6d150e4933"} Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.013481 5050 generic.go:334] "Generic (PLEG): container finished" podID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerID="853b5983511ee2cc09ef4cf7061b5ae11b9f7fcda1673033e3538e2463a00f00" exitCode=0 Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.013523 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" event={"ID":"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4","Type":"ContainerDied","Data":"853b5983511ee2cc09ef4cf7061b5ae11b9f7fcda1673033e3538e2463a00f00"} Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.015076 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1","Type":"ContainerStarted","Data":"63dfa0b3ca875bd42385d5810f14b1fbf7fb487b3d66d4ee0150c8f962587827"} Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.116081 5050 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-controller-grlfx"] Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.145711 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.192480 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.206931 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 05:39:02 crc kubenswrapper[5050]: W0131 05:39:02.220719 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fa70dc_40c9_4b8a_8239_d785f140d5d2.slice/crio-65a8d4e33ab1ae577eb8a76e29128f4020b9ddc80d3efa7f44623d0edbc34290 WatchSource:0}: Error finding container 65a8d4e33ab1ae577eb8a76e29128f4020b9ddc80d3efa7f44623d0edbc34290: Status 404 returned error can't find the container with id 65a8d4e33ab1ae577eb8a76e29128f4020b9ddc80d3efa7f44623d0edbc34290 Jan 31 05:39:02 crc kubenswrapper[5050]: W0131 05:39:02.223968 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92f101f3_10e7_4e7f_a980_ce6a40e6e042.slice/crio-3246f368f7e54de7bb84ff2ae8f68479867898f88da33e3c30f5ed12a623e51d WatchSource:0}: Error finding container 3246f368f7e54de7bb84ff2ae8f68479867898f88da33e3c30f5ed12a623e51d: Status 404 returned error can't find the container with id 3246f368f7e54de7bb84ff2ae8f68479867898f88da33e3c30f5ed12a623e51d Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.321011 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 05:39:02 crc kubenswrapper[5050]: W0131 05:39:02.349426 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44932166_fbc5_41a4_bdf6_a3931dcbe9f0.slice/crio-1054313bfaa378f7160c6093e3ea8ac055b662987049eb4fd62d120a0284ed62 WatchSource:0}: Error finding container 1054313bfaa378f7160c6093e3ea8ac055b662987049eb4fd62d120a0284ed62: Status 404 returned error can't find the container with id 1054313bfaa378f7160c6093e3ea8ac055b662987049eb4fd62d120a0284ed62 Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.429140 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 05:39:02 crc kubenswrapper[5050]: W0131 05:39:02.436031 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c3ec6f4_fbc1_40cd_bbcc_a3910770af49.slice/crio-7aba6e4e458ddd1097b658c43c3c094de7d5c4e27d19254eab57dc2ff048b32b WatchSource:0}: Error finding container 7aba6e4e458ddd1097b658c43c3c094de7d5c4e27d19254eab57dc2ff048b32b: Status 404 returned error can't find the container with id 7aba6e4e458ddd1097b658c43c3c094de7d5c4e27d19254eab57dc2ff048b32b Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.517879 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.523734 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.612846 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2db9k\" (UniqueName: \"kubernetes.io/projected/97716da0-2bbc-4c60-ac61-c27c355a6f2f-kube-api-access-2db9k\") pod \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.612903 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-dns-svc\") pod \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.612935 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14292d47-83cd-4ef4-a097-4e1763e6b97b-config\") pod \"14292d47-83cd-4ef4-a097-4e1763e6b97b\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.613515 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97716da0-2bbc-4c60-ac61-c27c355a6f2f" (UID: "97716da0-2bbc-4c60-ac61-c27c355a6f2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.613552 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14292d47-83cd-4ef4-a097-4e1763e6b97b-config" (OuterVolumeSpecName: "config") pod "14292d47-83cd-4ef4-a097-4e1763e6b97b" (UID: "14292d47-83cd-4ef4-a097-4e1763e6b97b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.613569 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmjtb\" (UniqueName: \"kubernetes.io/projected/14292d47-83cd-4ef4-a097-4e1763e6b97b-kube-api-access-cmjtb\") pod \"14292d47-83cd-4ef4-a097-4e1763e6b97b\" (UID: \"14292d47-83cd-4ef4-a097-4e1763e6b97b\") " Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.613697 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-config\") pod \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\" (UID: \"97716da0-2bbc-4c60-ac61-c27c355a6f2f\") " Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.614096 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-config" (OuterVolumeSpecName: "config") pod "97716da0-2bbc-4c60-ac61-c27c355a6f2f" (UID: "97716da0-2bbc-4c60-ac61-c27c355a6f2f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.614257 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.614269 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14292d47-83cd-4ef4-a097-4e1763e6b97b-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.614278 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97716da0-2bbc-4c60-ac61-c27c355a6f2f-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.618458 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97716da0-2bbc-4c60-ac61-c27c355a6f2f-kube-api-access-2db9k" (OuterVolumeSpecName: "kube-api-access-2db9k") pod "97716da0-2bbc-4c60-ac61-c27c355a6f2f" (UID: "97716da0-2bbc-4c60-ac61-c27c355a6f2f"). InnerVolumeSpecName "kube-api-access-2db9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.618487 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14292d47-83cd-4ef4-a097-4e1763e6b97b-kube-api-access-cmjtb" (OuterVolumeSpecName: "kube-api-access-cmjtb") pod "14292d47-83cd-4ef4-a097-4e1763e6b97b" (UID: "14292d47-83cd-4ef4-a097-4e1763e6b97b"). InnerVolumeSpecName "kube-api-access-cmjtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.715686 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2db9k\" (UniqueName: \"kubernetes.io/projected/97716da0-2bbc-4c60-ac61-c27c355a6f2f-kube-api-access-2db9k\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.715715 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmjtb\" (UniqueName: \"kubernetes.io/projected/14292d47-83cd-4ef4-a097-4e1763e6b97b-kube-api-access-cmjtb\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:02 crc kubenswrapper[5050]: I0131 05:39:02.914611 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-p2rnn"] Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.025764 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" event={"ID":"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4","Type":"ContainerStarted","Data":"4d609a62dbf8fba74f527ceeaba39b464ee97bea3280501c7824b1276df8c292"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.025880 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.028761 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49","Type":"ContainerStarted","Data":"7aba6e4e458ddd1097b658c43c3c094de7d5c4e27d19254eab57dc2ff048b32b"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.030901 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" event={"ID":"14292d47-83cd-4ef4-a097-4e1763e6b97b","Type":"ContainerDied","Data":"b3c895f6bef6d6e7d20b6948b3910b1607e5ae7589edf5acc91e81c1edad063f"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.031236 5050 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hmfgf" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.037904 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.037913 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ctcn8" event={"ID":"97716da0-2bbc-4c60-ac61-c27c355a6f2f","Type":"ContainerDied","Data":"ef99860f3eacf4ebce6865dd27b01e6cb95b147d986ac3f824b897f3cbe50ef7"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.039410 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"92f101f3-10e7-4e7f-a980-ce6a40e6e042","Type":"ContainerStarted","Data":"3246f368f7e54de7bb84ff2ae8f68479867898f88da33e3c30f5ed12a623e51d"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.045641 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" podStartSLOduration=7.280983512 podStartE2EDuration="16.045619898s" podCreationTimestamp="2026-01-31 05:38:47 +0000 UTC" firstStartedPulling="2026-01-31 05:38:52.69217649 +0000 UTC m=+1057.741338086" lastFinishedPulling="2026-01-31 05:39:01.456812876 +0000 UTC m=+1066.505974472" observedRunningTime="2026-01-31 05:39:03.043616975 +0000 UTC m=+1068.092778571" watchObservedRunningTime="2026-01-31 05:39:03.045619898 +0000 UTC m=+1068.094781494" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.047976 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"44932166-fbc5-41a4-bdf6-a3931dcbe9f0","Type":"ContainerStarted","Data":"1054313bfaa378f7160c6093e3ea8ac055b662987049eb4fd62d120a0284ed62"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.050388 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"0afb5f3d-b148-46fd-9867-071aafa5adff","Type":"ContainerStarted","Data":"ffbe3b5ec81ac721edc51f7504fb1f8e30215cd65261227559da489b2db2eff9"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.052223 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3fa70dc-40c9-4b8a-8239-d785f140d5d2","Type":"ContainerStarted","Data":"65a8d4e33ab1ae577eb8a76e29128f4020b9ddc80d3efa7f44623d0edbc34290"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.055656 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx" event={"ID":"5eca93ff-3985-4e89-9254-a5d2a94793d6","Type":"ContainerStarted","Data":"1bcbd78f60ac93c985544fde56e5205a6f625a020b701cd73cc806ab923825f6"} Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.094032 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hmfgf"] Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.096764 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hmfgf"] Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.127820 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ctcn8"] Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.136004 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ctcn8"] Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.753315 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14292d47-83cd-4ef4-a097-4e1763e6b97b" path="/var/lib/kubelet/pods/14292d47-83cd-4ef4-a097-4e1763e6b97b/volumes" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.753666 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97716da0-2bbc-4c60-ac61-c27c355a6f2f" path="/var/lib/kubelet/pods/97716da0-2bbc-4c60-ac61-c27c355a6f2f/volumes" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.948068 5050 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7ddmz"] Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.966172 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.970327 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 05:39:03 crc kubenswrapper[5050]: I0131 05:39:03.970609 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7ddmz"] Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.050772 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/82b2b313-a37f-4405-a49a-456f3c88ceb3-ovn-rundir\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.050833 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htslh\" (UniqueName: \"kubernetes.io/projected/82b2b313-a37f-4405-a49a-456f3c88ceb3-kube-api-access-htslh\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.050874 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b2b313-a37f-4405-a49a-456f3c88ceb3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.050896 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b2b313-a37f-4405-a49a-456f3c88ceb3-config\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.050983 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/82b2b313-a37f-4405-a49a-456f3c88ceb3-ovs-rundir\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.051007 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b2b313-a37f-4405-a49a-456f3c88ceb3-combined-ca-bundle\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.068051 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-887mr"] Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.077750 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p2rnn" event={"ID":"23898a5e-f7c6-473b-a882-c91ed8ff2e06","Type":"ContainerStarted","Data":"02ecfb3d830ee5183d84e2d161c3d07777ea710be38b38fb02973aa0f74c8208"} Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.101010 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9flz"] Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.102808 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.106085 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.113690 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9flz"] Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.152740 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/82b2b313-a37f-4405-a49a-456f3c88ceb3-ovn-rundir\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.152812 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htslh\" (UniqueName: \"kubernetes.io/projected/82b2b313-a37f-4405-a49a-456f3c88ceb3-kube-api-access-htslh\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.152862 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b2b313-a37f-4405-a49a-456f3c88ceb3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.152883 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b2b313-a37f-4405-a49a-456f3c88ceb3-config\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: 
I0131 05:39:04.152971 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/82b2b313-a37f-4405-a49a-456f3c88ceb3-ovs-rundir\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.152996 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b2b313-a37f-4405-a49a-456f3c88ceb3-combined-ca-bundle\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.153965 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/82b2b313-a37f-4405-a49a-456f3c88ceb3-ovn-rundir\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.155617 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/82b2b313-a37f-4405-a49a-456f3c88ceb3-ovs-rundir\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.156076 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b2b313-a37f-4405-a49a-456f3c88ceb3-config\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.163396 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b2b313-a37f-4405-a49a-456f3c88ceb3-combined-ca-bundle\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.163676 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b2b313-a37f-4405-a49a-456f3c88ceb3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.176295 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htslh\" (UniqueName: \"kubernetes.io/projected/82b2b313-a37f-4405-a49a-456f3c88ceb3-kube-api-access-htslh\") pod \"ovn-controller-metrics-7ddmz\" (UID: \"82b2b313-a37f-4405-a49a-456f3c88ceb3\") " pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.249786 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6d9q4"] Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.254310 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.254353 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.254384 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfq5v\" (UniqueName: \"kubernetes.io/projected/54414e79-dc7f-4d64-a805-7d72846b9e28-kube-api-access-rfq5v\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.254585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-config\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.279130 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qdgd8"] Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.280280 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.289938 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.291327 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qdgd8"] Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.307301 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7ddmz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.356820 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2r6t\" (UniqueName: \"kubernetes.io/projected/074ca3df-49a5-4075-ab96-377ea6feae84-kube-api-access-m2r6t\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.356874 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.356913 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-config\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.356972 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.357014 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.357052 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfq5v\" (UniqueName: \"kubernetes.io/projected/54414e79-dc7f-4d64-a805-7d72846b9e28-kube-api-access-rfq5v\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.357084 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.357126 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.357151 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-config\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.358170 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-config\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.358296 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.359257 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.380932 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfq5v\" (UniqueName: \"kubernetes.io/projected/54414e79-dc7f-4d64-a805-7d72846b9e28-kube-api-access-rfq5v\") pod \"dnsmasq-dns-7fd796d7df-l9flz\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.427395 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.458894 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.458998 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.459081 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2r6t\" (UniqueName: \"kubernetes.io/projected/074ca3df-49a5-4075-ab96-377ea6feae84-kube-api-access-m2r6t\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.459114 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.459149 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-config\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" 
Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.459786 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.460132 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.460422 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.460458 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-config\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.475772 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2r6t\" (UniqueName: \"kubernetes.io/projected/074ca3df-49a5-4075-ab96-377ea6feae84-kube-api-access-m2r6t\") pod \"dnsmasq-dns-86db49b7ff-qdgd8\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:04 crc kubenswrapper[5050]: I0131 05:39:04.617235 5050 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:39:05 crc kubenswrapper[5050]: I0131 05:39:05.083180 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerName="dnsmasq-dns" containerID="cri-o://4d609a62dbf8fba74f527ceeaba39b464ee97bea3280501c7824b1276df8c292" gracePeriod=10 Jan 31 05:39:06 crc kubenswrapper[5050]: I0131 05:39:06.095055 5050 generic.go:334] "Generic (PLEG): container finished" podID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerID="4d609a62dbf8fba74f527ceeaba39b464ee97bea3280501c7824b1276df8c292" exitCode=0 Jan 31 05:39:06 crc kubenswrapper[5050]: I0131 05:39:06.095148 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" event={"ID":"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4","Type":"ContainerDied","Data":"4d609a62dbf8fba74f527ceeaba39b464ee97bea3280501c7824b1276df8c292"} Jan 31 05:39:09 crc kubenswrapper[5050]: I0131 05:39:09.812581 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:39:09 crc kubenswrapper[5050]: I0131 05:39:09.945236 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-dns-svc\") pod \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " Jan 31 05:39:09 crc kubenswrapper[5050]: I0131 05:39:09.945328 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g94ks\" (UniqueName: \"kubernetes.io/projected/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-kube-api-access-g94ks\") pod \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " Jan 31 05:39:09 crc kubenswrapper[5050]: I0131 05:39:09.945363 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-config\") pod \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\" (UID: \"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4\") " Jan 31 05:39:09 crc kubenswrapper[5050]: I0131 05:39:09.950217 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-kube-api-access-g94ks" (OuterVolumeSpecName: "kube-api-access-g94ks") pod "1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" (UID: "1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4"). InnerVolumeSpecName "kube-api-access-g94ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[5050]: I0131 05:39:09.979020 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" (UID: "1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[5050]: I0131 05:39:09.986690 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-config" (OuterVolumeSpecName: "config") pod "1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" (UID: "1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.048052 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.048101 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.048118 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g94ks\" (UniqueName: \"kubernetes.io/projected/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4-kube-api-access-g94ks\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.135630 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" event={"ID":"1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4","Type":"ContainerDied","Data":"fbbe1c75585047ffdaaaefe96c9e8dd91dddb3ca26b42138e7ea3af238174c73"} Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.135727 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.135712 5050 scope.go:117] "RemoveContainer" containerID="4d609a62dbf8fba74f527ceeaba39b464ee97bea3280501c7824b1276df8c292" Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.183594 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-887mr"] Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.190847 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-887mr"] Jan 31 05:39:10 crc kubenswrapper[5050]: I0131 05:39:10.815773 5050 scope.go:117] "RemoveContainer" containerID="853b5983511ee2cc09ef4cf7061b5ae11b9f7fcda1673033e3538e2463a00f00" Jan 31 05:39:11 crc kubenswrapper[5050]: I0131 05:39:11.066548 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7ddmz"] Jan 31 05:39:11 crc kubenswrapper[5050]: I0131 05:39:11.144722 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7ddmz" event={"ID":"82b2b313-a37f-4405-a49a-456f3c88ceb3","Type":"ContainerStarted","Data":"770def12f570d1abe1c9065a01a242f905ce6c77043a97f876daea482aa0b43a"} Jan 31 05:39:11 crc kubenswrapper[5050]: I0131 05:39:11.186412 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9flz"] Jan 31 05:39:11 crc kubenswrapper[5050]: I0131 05:39:11.262971 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qdgd8"] Jan 31 05:39:11 crc kubenswrapper[5050]: W0131 05:39:11.492331 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54414e79_dc7f_4d64_a805_7d72846b9e28.slice/crio-3b08279112df938358c63cc1b4ecd06463c49b01dd2250fe5e497249f424563b WatchSource:0}: Error finding container 3b08279112df938358c63cc1b4ecd06463c49b01dd2250fe5e497249f424563b: Status 404 returned 
error can't find the container with id 3b08279112df938358c63cc1b4ecd06463c49b01dd2250fe5e497249f424563b Jan 31 05:39:11 crc kubenswrapper[5050]: W0131 05:39:11.494090 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod074ca3df_49a5_4075_ab96_377ea6feae84.slice/crio-a97d1e3616c011c7ead301a7ab2af1b96a64aa32d6ba8a9886aab55e141b7772 WatchSource:0}: Error finding container a97d1e3616c011c7ead301a7ab2af1b96a64aa32d6ba8a9886aab55e141b7772: Status 404 returned error can't find the container with id a97d1e3616c011c7ead301a7ab2af1b96a64aa32d6ba8a9886aab55e141b7772 Jan 31 05:39:11 crc kubenswrapper[5050]: I0131 05:39:11.754433 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" path="/var/lib/kubelet/pods/1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4/volumes" Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.153412 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1","Type":"ContainerStarted","Data":"79b74b5f4874129f3338719c0d9d05dc7d82b30f856a5b5705bb80931dfffba6"} Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.157485 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" event={"ID":"074ca3df-49a5-4075-ab96-377ea6feae84","Type":"ContainerStarted","Data":"a97d1e3616c011c7ead301a7ab2af1b96a64aa32d6ba8a9886aab55e141b7772"} Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.159795 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" event={"ID":"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f","Type":"ContainerStarted","Data":"b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6"} Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.159902 5050 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" podUID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerName="dnsmasq-dns" containerID="cri-o://b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6" gracePeriod=10 Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.159927 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.162537 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"92f101f3-10e7-4e7f-a980-ce6a40e6e042","Type":"ContainerStarted","Data":"06297d5368dee19f45ba2b8428fe1cc1de853dad393745946becb83443c23a4d"} Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.162645 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.163742 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" event={"ID":"54414e79-dc7f-4d64-a805-7d72846b9e28","Type":"ContainerStarted","Data":"3b08279112df938358c63cc1b4ecd06463c49b01dd2250fe5e497249f424563b"} Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.194038 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.775502992 podStartE2EDuration="21.194022626s" podCreationTimestamp="2026-01-31 05:38:51 +0000 UTC" firstStartedPulling="2026-01-31 05:39:02.227003092 +0000 UTC m=+1067.276164688" lastFinishedPulling="2026-01-31 05:39:10.645522686 +0000 UTC m=+1075.694684322" observedRunningTime="2026-01-31 05:39:12.188649021 +0000 UTC m=+1077.237810627" watchObservedRunningTime="2026-01-31 05:39:12.194022626 +0000 UTC m=+1077.243184222" Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.206686 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" podStartSLOduration=11.705860044 
podStartE2EDuration="25.206664515s" podCreationTimestamp="2026-01-31 05:38:47 +0000 UTC" firstStartedPulling="2026-01-31 05:38:47.972119957 +0000 UTC m=+1053.021281553" lastFinishedPulling="2026-01-31 05:39:01.472924428 +0000 UTC m=+1066.522086024" observedRunningTime="2026-01-31 05:39:12.201335112 +0000 UTC m=+1077.250496708" watchObservedRunningTime="2026-01-31 05:39:12.206664515 +0000 UTC m=+1077.255826111" Jan 31 05:39:12 crc kubenswrapper[5050]: I0131 05:39:12.753904 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-887mr" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: i/o timeout" Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.171408 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"44932166-fbc5-41a4-bdf6-a3931dcbe9f0","Type":"ContainerStarted","Data":"bd43e4496a38160c7aab786f5176e7f09ec080a801e403171eb5e8187f03421c"} Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.178763 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49","Type":"ContainerStarted","Data":"295106a13e35d3f405ac29ff70859a0384cdc061f1e600ffddcf5824d651dd89"} Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.180583 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.183038 5050 generic.go:334] "Generic (PLEG): container finished" podID="23898a5e-f7c6-473b-a882-c91ed8ff2e06" containerID="c806223fd6d77605395c9d4634713936965b489696462540c1ecd9ece0f33b54" exitCode=0
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.183105 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p2rnn" event={"ID":"23898a5e-f7c6-473b-a882-c91ed8ff2e06","Type":"ContainerDied","Data":"c806223fd6d77605395c9d4634713936965b489696462540c1ecd9ece0f33b54"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.188572 5050 generic.go:334] "Generic (PLEG): container finished" podID="074ca3df-49a5-4075-ab96-377ea6feae84" containerID="5a999813b46f4573cca649d8282c2461f6f7d93a57e957b286edfda6dd9b87c4" exitCode=0
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.188718 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" event={"ID":"074ca3df-49a5-4075-ab96-377ea6feae84","Type":"ContainerDied","Data":"5a999813b46f4573cca649d8282c2461f6f7d93a57e957b286edfda6dd9b87c4"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.192255 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3fa70dc-40c9-4b8a-8239-d785f140d5d2","Type":"ContainerStarted","Data":"42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.195424 5050 generic.go:334] "Generic (PLEG): container finished" podID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerID="b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6" exitCode=0
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.195475 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" event={"ID":"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f","Type":"ContainerDied","Data":"b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.195494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4" event={"ID":"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f","Type":"ContainerDied","Data":"1a6e16685b1f1af2264bdb2cddcc0d44690bbfb5458d84e08d82e933963d6744"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.195501 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6d9q4"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.195510 5050 scope.go:117] "RemoveContainer" containerID="b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.205842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d6595e6-419a-4ade-8070-99a41d9c8204","Type":"ContainerStarted","Data":"a291fc5e5c6c59c3ee1160b31ec407ad022c884a4781273988ea6ba9455b039c"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.213879 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"faec33cd-ecd1-4244-abb0-c5a27441abd2","Type":"ContainerStarted","Data":"908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.215837 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx" event={"ID":"5eca93ff-3985-4e89-9254-a5d2a94793d6","Type":"ContainerStarted","Data":"65468ad53af8b3bc9b34b848d5dc3b68f9d0bf25143dc29cbbf525c80ac5ffb6"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.216171 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-grlfx"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.217079 5050 generic.go:334] "Generic (PLEG): container finished" podID="54414e79-dc7f-4d64-a805-7d72846b9e28" containerID="3bd1b86c7e22db09c20d1c2aab73b74044a80dfec50ba439aa6ee6f0e1a5fc99" exitCode=0
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.217134 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" event={"ID":"54414e79-dc7f-4d64-a805-7d72846b9e28","Type":"ContainerDied","Data":"3bd1b86c7e22db09c20d1c2aab73b74044a80dfec50ba439aa6ee6f0e1a5fc99"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.223800 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0afb5f3d-b148-46fd-9867-071aafa5adff","Type":"ContainerStarted","Data":"196feacae87f155f9194935ad93031f3dc66d064da77e65a5f9c4293ace3b7af"}
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.223841 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.253651 5050 scope.go:117] "RemoveContainer" containerID="01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.296600 5050 scope.go:117] "RemoveContainer" containerID="b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6"
Jan 31 05:39:13 crc kubenswrapper[5050]: E0131 05:39:13.298422 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6\": container with ID starting with b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6 not found: ID does not exist" containerID="b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.298549 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6"} err="failed to get container status \"b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6\": rpc error: code = NotFound desc = could not find container \"b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6\": container with ID starting with b004ab2d9568155eea4996803e08006a13bf9d4639e0e37d955c3aa10bb3b0d6 not found: ID does not exist"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.298645 5050 scope.go:117] "RemoveContainer" containerID="01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1"
Jan 31 05:39:13 crc kubenswrapper[5050]: E0131 05:39:13.299894 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1\": container with ID starting with 01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1 not found: ID does not exist" containerID="01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.300073 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1"} err="failed to get container status \"01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1\": rpc error: code = NotFound desc = could not find container \"01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1\": container with ID starting with 01e9eb7d0898ae275c35c0db843dedd39b55fd001e8b65e58f4bb110a1445ed1 not found: ID does not exist"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.313654 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9bbn\" (UniqueName: \"kubernetes.io/projected/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-kube-api-access-h9bbn\") pod \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") "
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.313982 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-config\") pod \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") "
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.314054 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-dns-svc\") pod \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\" (UID: \"34761c5e-f79a-4fe3-93a9-6a1084dc3a0f\") "
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.320140 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-kube-api-access-h9bbn" (OuterVolumeSpecName: "kube-api-access-h9bbn") pod "34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" (UID: "34761c5e-f79a-4fe3-93a9-6a1084dc3a0f"). InnerVolumeSpecName "kube-api-access-h9bbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.385507 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" (UID: "34761c5e-f79a-4fe3-93a9-6a1084dc3a0f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.390509 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-grlfx" podStartSLOduration=7.750748258 podStartE2EDuration="16.390486416s" podCreationTimestamp="2026-01-31 05:38:57 +0000 UTC" firstStartedPulling="2026-01-31 05:39:02.17779112 +0000 UTC m=+1067.226952716" lastFinishedPulling="2026-01-31 05:39:10.817529278 +0000 UTC m=+1075.866690874" observedRunningTime="2026-01-31 05:39:13.357546651 +0000 UTC m=+1078.406708247" watchObservedRunningTime="2026-01-31 05:39:13.390486416 +0000 UTC m=+1078.439648012"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.392223 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-config" (OuterVolumeSpecName: "config") pod "34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" (UID: "34761c5e-f79a-4fe3-93a9-6a1084dc3a0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.406506 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.878976172 podStartE2EDuration="20.406479616s" podCreationTimestamp="2026-01-31 05:38:53 +0000 UTC" firstStartedPulling="2026-01-31 05:39:02.177776939 +0000 UTC m=+1067.226938535" lastFinishedPulling="2026-01-31 05:39:11.705280383 +0000 UTC m=+1076.754441979" observedRunningTime="2026-01-31 05:39:13.376657595 +0000 UTC m=+1078.425819201" watchObservedRunningTime="2026-01-31 05:39:13.406479616 +0000 UTC m=+1078.455641222"
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.416157 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9bbn\" (UniqueName: \"kubernetes.io/projected/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-kube-api-access-h9bbn\") on node \"crc\" DevicePath \"\""
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.416188 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.416197 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.536604 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6d9q4"]
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.541370 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6d9q4"]
Jan 31 05:39:13 crc kubenswrapper[5050]: I0131 05:39:13.746844 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" path="/var/lib/kubelet/pods/34761c5e-f79a-4fe3-93a9-6a1084dc3a0f/volumes"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.238474 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" event={"ID":"074ca3df-49a5-4075-ab96-377ea6feae84","Type":"ContainerStarted","Data":"a99d17a1bd8e0b394a55c33ac92e0074a09c7e9e953679ecb27d51bd7ce83544"}
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.238874 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.251949 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" event={"ID":"54414e79-dc7f-4d64-a805-7d72846b9e28","Type":"ContainerStarted","Data":"41030a753632e14dce50e188330c336dd2a693ada774f99226792724fd2d089b"}
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.252280 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.256151 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"44932166-fbc5-41a4-bdf6-a3931dcbe9f0","Type":"ContainerStarted","Data":"289122ea62271db03d111106edd33b16a737f6ac6733b326ceb47323a21021da"}
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.261992 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7ddmz" event={"ID":"82b2b313-a37f-4405-a49a-456f3c88ceb3","Type":"ContainerStarted","Data":"5ce6928a093b38f5c351093a9602104ec7c7e5019a28b9b98ef1a9fbd870f1ac"}
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.268364 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c3ec6f4-fbc1-40cd-bbcc-a3910770af49","Type":"ContainerStarted","Data":"c73ba36dba29cfc6a67bbf3f97003d9b9aa4c41392f980b25e1df5a131f00269"}
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.277675 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p2rnn" event={"ID":"23898a5e-f7c6-473b-a882-c91ed8ff2e06","Type":"ContainerStarted","Data":"fd93f2c95325727bfce5f6a5a40b9a76e90df36a1db0f04f65ee862e8afecba3"}
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.277767 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p2rnn" event={"ID":"23898a5e-f7c6-473b-a882-c91ed8ff2e06","Type":"ContainerStarted","Data":"baa04f87fa2159aa5ec3d6517b8a79a899122453363363046e8e8d96773d4005"}
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.291276 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" podStartSLOduration=10.29124493 podStartE2EDuration="10.29124493s" podCreationTimestamp="2026-01-31 05:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:39:14.27038747 +0000 UTC m=+1079.319549136" watchObservedRunningTime="2026-01-31 05:39:14.29124493 +0000 UTC m=+1079.340406556"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.307981 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.614364252 podStartE2EDuration="17.307920018s" podCreationTimestamp="2026-01-31 05:38:57 +0000 UTC" firstStartedPulling="2026-01-31 05:39:02.352052252 +0000 UTC m=+1067.401213848" lastFinishedPulling="2026-01-31 05:39:13.045608018 +0000 UTC m=+1078.094769614" observedRunningTime="2026-01-31 05:39:14.296483581 +0000 UTC m=+1079.345645217" watchObservedRunningTime="2026-01-31 05:39:14.307920018 +0000 UTC m=+1079.357081644"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.330283 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" podStartSLOduration=10.330259289 podStartE2EDuration="10.330259289s" podCreationTimestamp="2026-01-31 05:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:39:14.323102777 +0000 UTC m=+1079.372264383" watchObservedRunningTime="2026-01-31 05:39:14.330259289 +0000 UTC m=+1079.379420925"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.353348 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.9047597849999995 podStartE2EDuration="16.353325378s" podCreationTimestamp="2026-01-31 05:38:58 +0000 UTC" firstStartedPulling="2026-01-31 05:39:02.438214978 +0000 UTC m=+1067.487376574" lastFinishedPulling="2026-01-31 05:39:12.886780571 +0000 UTC m=+1077.935942167" observedRunningTime="2026-01-31 05:39:14.347620135 +0000 UTC m=+1079.396781741" watchObservedRunningTime="2026-01-31 05:39:14.353325378 +0000 UTC m=+1079.402486994"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.375666 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7ddmz" podStartSLOduration=9.591682622 podStartE2EDuration="11.375641558s" podCreationTimestamp="2026-01-31 05:39:03 +0000 UTC" firstStartedPulling="2026-01-31 05:39:11.09856456 +0000 UTC m=+1076.147726146" lastFinishedPulling="2026-01-31 05:39:12.882523476 +0000 UTC m=+1077.931685082" observedRunningTime="2026-01-31 05:39:14.364358075 +0000 UTC m=+1079.413519711" watchObservedRunningTime="2026-01-31 05:39:14.375641558 +0000 UTC m=+1079.424803194"
Jan 31 05:39:14 crc kubenswrapper[5050]: I0131 05:39:14.427424 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-p2rnn" podStartSLOduration=9.857235781 podStartE2EDuration="17.427399489s" podCreationTimestamp="2026-01-31 05:38:57 +0000 UTC" firstStartedPulling="2026-01-31 05:39:03.371223178 +0000 UTC m=+1068.420384774" lastFinishedPulling="2026-01-31 05:39:10.941386886 +0000 UTC m=+1075.990548482" observedRunningTime="2026-01-31 05:39:14.420445772 +0000 UTC m=+1079.469607378" watchObservedRunningTime="2026-01-31 05:39:14.427399489 +0000 UTC m=+1079.476561105"
Jan 31 05:39:15 crc kubenswrapper[5050]: I0131 05:39:15.055233 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 31 05:39:15 crc kubenswrapper[5050]: I0131 05:39:15.055750 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 31 05:39:15 crc kubenswrapper[5050]: I0131 05:39:15.104899 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 31 05:39:15 crc kubenswrapper[5050]: I0131 05:39:15.285025 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p2rnn"
Jan 31 05:39:15 crc kubenswrapper[5050]: I0131 05:39:15.285198 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p2rnn"
Jan 31 05:39:16 crc kubenswrapper[5050]: I0131 05:39:16.656114 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 31 05:39:16 crc kubenswrapper[5050]: I0131 05:39:16.699522 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:16.722142 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:17.308226 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:17.377981 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:19.325219 5050 generic.go:334] "Generic (PLEG): container finished" podID="6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1" containerID="79b74b5f4874129f3338719c0d9d05dc7d82b30f856a5b5705bb80931dfffba6" exitCode=0
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:19.325340 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1","Type":"ContainerDied","Data":"79b74b5f4874129f3338719c0d9d05dc7d82b30f856a5b5705bb80931dfffba6"}
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:19.429312 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:19.619411 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:19.678621 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9flz"]
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.087599 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.269542 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 31 05:39:20 crc kubenswrapper[5050]: E0131 05:39:20.269917 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerName="init"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.269934 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerName="init"
Jan 31 05:39:20 crc kubenswrapper[5050]: E0131 05:39:20.270079 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerName="dnsmasq-dns"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.270114 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerName="dnsmasq-dns"
Jan 31 05:39:20 crc kubenswrapper[5050]: E0131 05:39:20.270208 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerName="dnsmasq-dns"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.270223 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerName="dnsmasq-dns"
Jan 31 05:39:20 crc kubenswrapper[5050]: E0131 05:39:20.270249 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerName="init"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.270260 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerName="init"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.270627 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="34761c5e-f79a-4fe3-93a9-6a1084dc3a0f" containerName="dnsmasq-dns"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.270649 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea42cc2-c35c-46b1-adb3-9cc699bbd9b4" containerName="dnsmasq-dns"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.272066 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.274065 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-j5495"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.274321 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.274817 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.275367 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.293287 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.333599 5050 generic.go:334] "Generic (PLEG): container finished" podID="9d6595e6-419a-4ade-8070-99a41d9c8204" containerID="a291fc5e5c6c59c3ee1160b31ec407ad022c884a4781273988ea6ba9455b039c" exitCode=0
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.333753 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d6595e6-419a-4ade-8070-99a41d9c8204","Type":"ContainerDied","Data":"a291fc5e5c6c59c3ee1160b31ec407ad022c884a4781273988ea6ba9455b039c"}
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.335864 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" podUID="54414e79-dc7f-4d64-a805-7d72846b9e28" containerName="dnsmasq-dns" containerID="cri-o://41030a753632e14dce50e188330c336dd2a693ada774f99226792724fd2d089b" gracePeriod=10
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.336064 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1","Type":"ContainerStarted","Data":"909d3a4de367f30d699b37f7af6ad4e8a9d25a5309bfc18b2f529f68e672750d"}
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.381087 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.697878334 podStartE2EDuration="32.3810524s" podCreationTimestamp="2026-01-31 05:38:48 +0000 UTC" firstStartedPulling="2026-01-31 05:39:01.961796335 +0000 UTC m=+1067.010957941" lastFinishedPulling="2026-01-31 05:39:10.644970391 +0000 UTC m=+1075.694132007" observedRunningTime="2026-01-31 05:39:20.373537898 +0000 UTC m=+1085.422699494" watchObservedRunningTime="2026-01-31 05:39:20.3810524 +0000 UTC m=+1085.430214016"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.435618 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b719fd5c-6f02-4b14-9807-8752304791e4-scripts\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.435662 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.435687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b719fd5c-6f02-4b14-9807-8752304791e4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.435908 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.436081 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b719fd5c-6f02-4b14-9807-8752304791e4-config\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.436139 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.438415 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t88fv\" (UniqueName: \"kubernetes.io/projected/b719fd5c-6f02-4b14-9807-8752304791e4-kube-api-access-t88fv\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.541514 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t88fv\" (UniqueName: \"kubernetes.io/projected/b719fd5c-6f02-4b14-9807-8752304791e4-kube-api-access-t88fv\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.541558 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b719fd5c-6f02-4b14-9807-8752304791e4-scripts\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.541583 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.541610 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b719fd5c-6f02-4b14-9807-8752304791e4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.541665 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.541729 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b719fd5c-6f02-4b14-9807-8752304791e4-config\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.541749 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.542804 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b719fd5c-6f02-4b14-9807-8752304791e4-config\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.543190 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b719fd5c-6f02-4b14-9807-8752304791e4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.543291 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b719fd5c-6f02-4b14-9807-8752304791e4-scripts\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.545089 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.545686 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.546502 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b719fd5c-6f02-4b14-9807-8752304791e4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.560579 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t88fv\" (UniqueName: \"kubernetes.io/projected/b719fd5c-6f02-4b14-9807-8752304791e4-kube-api-access-t88fv\") pod \"ovn-northd-0\" (UID: \"b719fd5c-6f02-4b14-9807-8752304791e4\") " pod="openstack/ovn-northd-0"
Jan 31 05:39:20 crc kubenswrapper[5050]: I0131 05:39:20.589512 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.031383 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 31 05:39:21 crc kubenswrapper[5050]: W0131 05:39:21.046387 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb719fd5c_6f02_4b14_9807_8752304791e4.slice/crio-69d80a68130f864998d8610ce6b878b717021dee9c487d5eb5d5a181fd55180a WatchSource:0}: Error finding container 69d80a68130f864998d8610ce6b878b717021dee9c487d5eb5d5a181fd55180a: Status 404 returned error can't find the container with id 69d80a68130f864998d8610ce6b878b717021dee9c487d5eb5d5a181fd55180a
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.393291 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b719fd5c-6f02-4b14-9807-8752304791e4","Type":"ContainerStarted","Data":"69d80a68130f864998d8610ce6b878b717021dee9c487d5eb5d5a181fd55180a"}
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.396516 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d6595e6-419a-4ade-8070-99a41d9c8204","Type":"ContainerStarted","Data":"c05bcd246913f74eb8e19b5150dadfba4e933a40ad4c3ad183121b5a8fd3f964"}
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.399592 5050 generic.go:334] "Generic (PLEG): container finished" podID="54414e79-dc7f-4d64-a805-7d72846b9e28" containerID="41030a753632e14dce50e188330c336dd2a693ada774f99226792724fd2d089b" exitCode=0
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.399634 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" event={"ID":"54414e79-dc7f-4d64-a805-7d72846b9e28","Type":"ContainerDied","Data":"41030a753632e14dce50e188330c336dd2a693ada774f99226792724fd2d089b"}
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.421800 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.465917103 podStartE2EDuration="31.421782245s" podCreationTimestamp="2026-01-31 05:38:50 +0000 UTC" firstStartedPulling="2026-01-31 05:39:01.858824289 +0000 UTC m=+1066.907985885" lastFinishedPulling="2026-01-31 05:39:10.814689431 +0000 UTC m=+1075.863851027" observedRunningTime="2026-01-31 05:39:21.415389803 +0000 UTC m=+1086.464551399" watchObservedRunningTime="2026-01-31 05:39:21.421782245 +0000 UTC m=+1086.470943841"
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.453202 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.453240 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.597844 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz"
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.664095 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-dns-svc\") pod \"54414e79-dc7f-4d64-a805-7d72846b9e28\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") "
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.664170 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfq5v\" (UniqueName: \"kubernetes.io/projected/54414e79-dc7f-4d64-a805-7d72846b9e28-kube-api-access-rfq5v\") pod \"54414e79-dc7f-4d64-a805-7d72846b9e28\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") "
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.664250 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-ovsdbserver-nb\") pod \"54414e79-dc7f-4d64-a805-7d72846b9e28\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") "
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.664310 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-config\") pod \"54414e79-dc7f-4d64-a805-7d72846b9e28\" (UID: \"54414e79-dc7f-4d64-a805-7d72846b9e28\") "
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.682160 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54414e79-dc7f-4d64-a805-7d72846b9e28-kube-api-access-rfq5v" (OuterVolumeSpecName: "kube-api-access-rfq5v") pod "54414e79-dc7f-4d64-a805-7d72846b9e28" (UID: "54414e79-dc7f-4d64-a805-7d72846b9e28"). InnerVolumeSpecName "kube-api-access-rfq5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.704881 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-config" (OuterVolumeSpecName: "config") pod "54414e79-dc7f-4d64-a805-7d72846b9e28" (UID: "54414e79-dc7f-4d64-a805-7d72846b9e28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.710528 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54414e79-dc7f-4d64-a805-7d72846b9e28" (UID: "54414e79-dc7f-4d64-a805-7d72846b9e28"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.711934 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54414e79-dc7f-4d64-a805-7d72846b9e28" (UID: "54414e79-dc7f-4d64-a805-7d72846b9e28"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.766151 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.766179 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.766188 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54414e79-dc7f-4d64-a805-7d72846b9e28-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:21 crc kubenswrapper[5050]: I0131 05:39:21.766197 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfq5v\" (UniqueName: \"kubernetes.io/projected/54414e79-dc7f-4d64-a805-7d72846b9e28-kube-api-access-rfq5v\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:22 crc kubenswrapper[5050]: I0131 05:39:22.411404 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" event={"ID":"54414e79-dc7f-4d64-a805-7d72846b9e28","Type":"ContainerDied","Data":"3b08279112df938358c63cc1b4ecd06463c49b01dd2250fe5e497249f424563b"} Jan 31 05:39:22 crc kubenswrapper[5050]: I0131 05:39:22.411461 5050 scope.go:117] "RemoveContainer" containerID="41030a753632e14dce50e188330c336dd2a693ada774f99226792724fd2d089b" Jan 31 05:39:22 crc kubenswrapper[5050]: I0131 05:39:22.411469 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9flz" Jan 31 05:39:22 crc kubenswrapper[5050]: I0131 05:39:22.431157 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9flz"] Jan 31 05:39:22 crc kubenswrapper[5050]: I0131 05:39:22.436859 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9flz"] Jan 31 05:39:22 crc kubenswrapper[5050]: I0131 05:39:22.438354 5050 scope.go:117] "RemoveContainer" containerID="3bd1b86c7e22db09c20d1c2aab73b74044a80dfec50ba439aa6ee6f0e1a5fc99" Jan 31 05:39:23 crc kubenswrapper[5050]: I0131 05:39:23.602797 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 05:39:23 crc kubenswrapper[5050]: I0131 05:39:23.748404 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54414e79-dc7f-4d64-a805-7d72846b9e28" path="/var/lib/kubelet/pods/54414e79-dc7f-4d64-a805-7d72846b9e28/volumes" Jan 31 05:39:24 crc kubenswrapper[5050]: I0131 05:39:24.458615 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b719fd5c-6f02-4b14-9807-8752304791e4","Type":"ContainerStarted","Data":"0d3dbea99089f48c4663a45dcb23d2cfcbc0b5ed3feb3e930e07d0e7b62754e2"} Jan 31 05:39:24 crc kubenswrapper[5050]: I0131 05:39:24.458670 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b719fd5c-6f02-4b14-9807-8752304791e4","Type":"ContainerStarted","Data":"3414c97661d9a9f51033c09ba7bc35d2b7ff52ae293e695223312c60d052d8b7"} Jan 31 05:39:24 crc kubenswrapper[5050]: I0131 05:39:24.458825 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 05:39:24 crc kubenswrapper[5050]: I0131 05:39:24.503631 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.330295479 podStartE2EDuration="4.503607718s" 
podCreationTimestamp="2026-01-31 05:39:20 +0000 UTC" firstStartedPulling="2026-01-31 05:39:21.048265638 +0000 UTC m=+1086.097427234" lastFinishedPulling="2026-01-31 05:39:23.221577887 +0000 UTC m=+1088.270739473" observedRunningTime="2026-01-31 05:39:24.498812008 +0000 UTC m=+1089.547973604" watchObservedRunningTime="2026-01-31 05:39:24.503607718 +0000 UTC m=+1089.552769314" Jan 31 05:39:26 crc kubenswrapper[5050]: I0131 05:39:26.229759 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 05:39:26 crc kubenswrapper[5050]: I0131 05:39:26.345095 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9d6595e6-419a-4ade-8070-99a41d9c8204" containerName="galera" probeResult="failure" output=< Jan 31 05:39:26 crc kubenswrapper[5050]: wsrep_local_state_comment (Joined) differs from Synced Jan 31 05:39:26 crc kubenswrapper[5050]: > Jan 31 05:39:30 crc kubenswrapper[5050]: I0131 05:39:30.109454 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 05:39:30 crc kubenswrapper[5050]: I0131 05:39:30.109636 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 05:39:30 crc kubenswrapper[5050]: I0131 05:39:30.200769 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 05:39:30 crc kubenswrapper[5050]: I0131 05:39:30.591153 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.346285 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-45f4-account-create-update-dnhlv"] Jan 31 05:39:31 crc kubenswrapper[5050]: E0131 05:39:31.348260 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54414e79-dc7f-4d64-a805-7d72846b9e28" 
containerName="dnsmasq-dns" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.348394 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="54414e79-dc7f-4d64-a805-7d72846b9e28" containerName="dnsmasq-dns" Jan 31 05:39:31 crc kubenswrapper[5050]: E0131 05:39:31.348513 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54414e79-dc7f-4d64-a805-7d72846b9e28" containerName="init" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.348601 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="54414e79-dc7f-4d64-a805-7d72846b9e28" containerName="init" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.348861 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="54414e79-dc7f-4d64-a805-7d72846b9e28" containerName="dnsmasq-dns" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.349582 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.352129 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.352925 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-45f4-account-create-update-dnhlv"] Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.394066 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qj56b"] Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.395061 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.401837 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qj56b"] Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.421250 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d403f46d-7461-4c02-8788-f0d4fc1039eb-operator-scripts\") pod \"keystone-45f4-account-create-update-dnhlv\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.421375 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt9bt\" (UniqueName: \"kubernetes.io/projected/d403f46d-7461-4c02-8788-f0d4fc1039eb-kube-api-access-nt9bt\") pod \"keystone-45f4-account-create-update-dnhlv\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.523258 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt9bt\" (UniqueName: \"kubernetes.io/projected/d403f46d-7461-4c02-8788-f0d4fc1039eb-kube-api-access-nt9bt\") pod \"keystone-45f4-account-create-update-dnhlv\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.523662 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d403f46d-7461-4c02-8788-f0d4fc1039eb-operator-scripts\") pod \"keystone-45f4-account-create-update-dnhlv\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc 
kubenswrapper[5050]: I0131 05:39:31.523794 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9664617a-0182-491b-b8e4-dc8f49991888-operator-scripts\") pod \"keystone-db-create-qj56b\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.523928 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgc7m\" (UniqueName: \"kubernetes.io/projected/9664617a-0182-491b-b8e4-dc8f49991888-kube-api-access-cgc7m\") pod \"keystone-db-create-qj56b\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.524509 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d403f46d-7461-4c02-8788-f0d4fc1039eb-operator-scripts\") pod \"keystone-45f4-account-create-update-dnhlv\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.541672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt9bt\" (UniqueName: \"kubernetes.io/projected/d403f46d-7461-4c02-8788-f0d4fc1039eb-kube-api-access-nt9bt\") pod \"keystone-45f4-account-create-update-dnhlv\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.546476 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.628177 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgc7m\" (UniqueName: 
\"kubernetes.io/projected/9664617a-0182-491b-b8e4-dc8f49991888-kube-api-access-cgc7m\") pod \"keystone-db-create-qj56b\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.628797 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9664617a-0182-491b-b8e4-dc8f49991888-operator-scripts\") pod \"keystone-db-create-qj56b\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.633110 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9664617a-0182-491b-b8e4-dc8f49991888-operator-scripts\") pod \"keystone-db-create-qj56b\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.648311 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgc7m\" (UniqueName: \"kubernetes.io/projected/9664617a-0182-491b-b8e4-dc8f49991888-kube-api-access-cgc7m\") pod \"keystone-db-create-qj56b\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.698516 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.707362 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pnbwj"] Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.707843 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.708350 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.717152 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b6a7-account-create-update-xrks7"] Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.721758 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.725583 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.749919 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pnbwj"] Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.758070 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b6a7-account-create-update-xrks7"] Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.836422 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvdh\" (UniqueName: \"kubernetes.io/projected/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-kube-api-access-fzvdh\") pod \"placement-db-create-pnbwj\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.836509 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ldcr\" (UniqueName: \"kubernetes.io/projected/bdee2500-7092-4156-998b-46952a2ba2d7-kube-api-access-4ldcr\") pod \"placement-b6a7-account-create-update-xrks7\" (UID: \"bdee2500-7092-4156-998b-46952a2ba2d7\") " pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 
05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.836565 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-operator-scripts\") pod \"placement-db-create-pnbwj\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.836608 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdee2500-7092-4156-998b-46952a2ba2d7-operator-scripts\") pod \"placement-b6a7-account-create-update-xrks7\" (UID: \"bdee2500-7092-4156-998b-46952a2ba2d7\") " pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.938139 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ldcr\" (UniqueName: \"kubernetes.io/projected/bdee2500-7092-4156-998b-46952a2ba2d7-kube-api-access-4ldcr\") pod \"placement-b6a7-account-create-update-xrks7\" (UID: \"bdee2500-7092-4156-998b-46952a2ba2d7\") " pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.938482 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-operator-scripts\") pod \"placement-db-create-pnbwj\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.938580 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdee2500-7092-4156-998b-46952a2ba2d7-operator-scripts\") pod \"placement-b6a7-account-create-update-xrks7\" (UID: 
\"bdee2500-7092-4156-998b-46952a2ba2d7\") " pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.938619 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvdh\" (UniqueName: \"kubernetes.io/projected/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-kube-api-access-fzvdh\") pod \"placement-db-create-pnbwj\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.939524 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdee2500-7092-4156-998b-46952a2ba2d7-operator-scripts\") pod \"placement-b6a7-account-create-update-xrks7\" (UID: \"bdee2500-7092-4156-998b-46952a2ba2d7\") " pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.939672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-operator-scripts\") pod \"placement-db-create-pnbwj\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.957220 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvdh\" (UniqueName: \"kubernetes.io/projected/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-kube-api-access-fzvdh\") pod \"placement-db-create-pnbwj\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:31 crc kubenswrapper[5050]: I0131 05:39:31.962798 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ldcr\" (UniqueName: \"kubernetes.io/projected/bdee2500-7092-4156-998b-46952a2ba2d7-kube-api-access-4ldcr\") pod \"placement-b6a7-account-create-update-xrks7\" 
(UID: \"bdee2500-7092-4156-998b-46952a2ba2d7\") " pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.108102 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.119722 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.173228 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qj56b"] Jan 31 05:39:32 crc kubenswrapper[5050]: W0131 05:39:32.182025 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9664617a_0182_491b_b8e4_dc8f49991888.slice/crio-a395fddc5e8c93aab223d7c4ea7703dfd3fc73c984299a7f1741de11c5083fe9 WatchSource:0}: Error finding container a395fddc5e8c93aab223d7c4ea7703dfd3fc73c984299a7f1741de11c5083fe9: Status 404 returned error can't find the container with id a395fddc5e8c93aab223d7c4ea7703dfd3fc73c984299a7f1741de11c5083fe9 Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.271214 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-45f4-account-create-update-dnhlv"] Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.521735 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-45f4-account-create-update-dnhlv" event={"ID":"d403f46d-7461-4c02-8788-f0d4fc1039eb","Type":"ContainerStarted","Data":"c91c3442b509e8c8bc15f846eb7d1b3db9f875d2949b1a849f0732c35c46c3fb"} Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.522114 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-45f4-account-create-update-dnhlv" 
event={"ID":"d403f46d-7461-4c02-8788-f0d4fc1039eb","Type":"ContainerStarted","Data":"0d2f83996fe1d9f9348a3014477a8e1eb6f408eabc86f28454971cad4c3ec5bc"} Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.524177 5050 generic.go:334] "Generic (PLEG): container finished" podID="9664617a-0182-491b-b8e4-dc8f49991888" containerID="826dd6116d19a9f79122376e576b137f40289e2c61b367a14f9b3d42ca9f7ae9" exitCode=0 Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.524499 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qj56b" event={"ID":"9664617a-0182-491b-b8e4-dc8f49991888","Type":"ContainerDied","Data":"826dd6116d19a9f79122376e576b137f40289e2c61b367a14f9b3d42ca9f7ae9"} Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.524543 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qj56b" event={"ID":"9664617a-0182-491b-b8e4-dc8f49991888","Type":"ContainerStarted","Data":"a395fddc5e8c93aab223d7c4ea7703dfd3fc73c984299a7f1741de11c5083fe9"} Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.544491 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-45f4-account-create-update-dnhlv" podStartSLOduration=1.544476325 podStartE2EDuration="1.544476325s" podCreationTimestamp="2026-01-31 05:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:39:32.53800046 +0000 UTC m=+1097.587162056" watchObservedRunningTime="2026-01-31 05:39:32.544476325 +0000 UTC m=+1097.593637921" Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.578330 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pnbwj"] Jan 31 05:39:32 crc kubenswrapper[5050]: W0131 05:39:32.580224 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod355e9317_6a93_47e5_83c5_1c5eb6a4d9a7.slice/crio-bbbe22901dbea2d74499e15309f199b9eb83bbcc2738405ef0bbbfada3450c4c WatchSource:0}: Error finding container bbbe22901dbea2d74499e15309f199b9eb83bbcc2738405ef0bbbfada3450c4c: Status 404 returned error can't find the container with id bbbe22901dbea2d74499e15309f199b9eb83bbcc2738405ef0bbbfada3450c4c Jan 31 05:39:32 crc kubenswrapper[5050]: I0131 05:39:32.631294 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b6a7-account-create-update-xrks7"] Jan 31 05:39:32 crc kubenswrapper[5050]: W0131 05:39:32.636244 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdee2500_7092_4156_998b_46952a2ba2d7.slice/crio-6cba66adb71686dd0397ad8ff94aaff995a83e88005fa82e42b9a23ac983eb6d WatchSource:0}: Error finding container 6cba66adb71686dd0397ad8ff94aaff995a83e88005fa82e42b9a23ac983eb6d: Status 404 returned error can't find the container with id 6cba66adb71686dd0397ad8ff94aaff995a83e88005fa82e42b9a23ac983eb6d Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.536231 5050 generic.go:334] "Generic (PLEG): container finished" podID="355e9317-6a93-47e5-83c5-1c5eb6a4d9a7" containerID="48d5738832e2c7daea91d17c0d359a95473297b7633bedf9f38ad57d5e563b54" exitCode=0 Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.536333 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pnbwj" event={"ID":"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7","Type":"ContainerDied","Data":"48d5738832e2c7daea91d17c0d359a95473297b7633bedf9f38ad57d5e563b54"} Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.536563 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pnbwj" 
event={"ID":"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7","Type":"ContainerStarted","Data":"bbbe22901dbea2d74499e15309f199b9eb83bbcc2738405ef0bbbfada3450c4c"} Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.539427 5050 generic.go:334] "Generic (PLEG): container finished" podID="d403f46d-7461-4c02-8788-f0d4fc1039eb" containerID="c91c3442b509e8c8bc15f846eb7d1b3db9f875d2949b1a849f0732c35c46c3fb" exitCode=0 Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.539497 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-45f4-account-create-update-dnhlv" event={"ID":"d403f46d-7461-4c02-8788-f0d4fc1039eb","Type":"ContainerDied","Data":"c91c3442b509e8c8bc15f846eb7d1b3db9f875d2949b1a849f0732c35c46c3fb"} Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.541184 5050 generic.go:334] "Generic (PLEG): container finished" podID="bdee2500-7092-4156-998b-46952a2ba2d7" containerID="72779be3c8faf9a173d6771bbc40ff0fcc81153a7f765779ae504affd9b7a3ab" exitCode=0 Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.541254 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b6a7-account-create-update-xrks7" event={"ID":"bdee2500-7092-4156-998b-46952a2ba2d7","Type":"ContainerDied","Data":"72779be3c8faf9a173d6771bbc40ff0fcc81153a7f765779ae504affd9b7a3ab"} Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.541272 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b6a7-account-create-update-xrks7" event={"ID":"bdee2500-7092-4156-998b-46952a2ba2d7","Type":"ContainerStarted","Data":"6cba66adb71686dd0397ad8ff94aaff995a83e88005fa82e42b9a23ac983eb6d"} Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.885430 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.967694 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgc7m\" (UniqueName: \"kubernetes.io/projected/9664617a-0182-491b-b8e4-dc8f49991888-kube-api-access-cgc7m\") pod \"9664617a-0182-491b-b8e4-dc8f49991888\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.967857 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9664617a-0182-491b-b8e4-dc8f49991888-operator-scripts\") pod \"9664617a-0182-491b-b8e4-dc8f49991888\" (UID: \"9664617a-0182-491b-b8e4-dc8f49991888\") " Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.968548 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9664617a-0182-491b-b8e4-dc8f49991888-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9664617a-0182-491b-b8e4-dc8f49991888" (UID: "9664617a-0182-491b-b8e4-dc8f49991888"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:33 crc kubenswrapper[5050]: I0131 05:39:33.974061 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9664617a-0182-491b-b8e4-dc8f49991888-kube-api-access-cgc7m" (OuterVolumeSpecName: "kube-api-access-cgc7m") pod "9664617a-0182-491b-b8e4-dc8f49991888" (UID: "9664617a-0182-491b-b8e4-dc8f49991888"). InnerVolumeSpecName "kube-api-access-cgc7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.069629 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9664617a-0182-491b-b8e4-dc8f49991888-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.069683 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgc7m\" (UniqueName: \"kubernetes.io/projected/9664617a-0182-491b-b8e4-dc8f49991888-kube-api-access-cgc7m\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.552680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qj56b" event={"ID":"9664617a-0182-491b-b8e4-dc8f49991888","Type":"ContainerDied","Data":"a395fddc5e8c93aab223d7c4ea7703dfd3fc73c984299a7f1741de11c5083fe9"} Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.557376 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a395fddc5e8c93aab223d7c4ea7703dfd3fc73c984299a7f1741de11c5083fe9" Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.552690 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qj56b" Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.932337 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.985482 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ldcr\" (UniqueName: \"kubernetes.io/projected/bdee2500-7092-4156-998b-46952a2ba2d7-kube-api-access-4ldcr\") pod \"bdee2500-7092-4156-998b-46952a2ba2d7\" (UID: \"bdee2500-7092-4156-998b-46952a2ba2d7\") " Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.985578 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdee2500-7092-4156-998b-46952a2ba2d7-operator-scripts\") pod \"bdee2500-7092-4156-998b-46952a2ba2d7\" (UID: \"bdee2500-7092-4156-998b-46952a2ba2d7\") " Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.986272 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdee2500-7092-4156-998b-46952a2ba2d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdee2500-7092-4156-998b-46952a2ba2d7" (UID: "bdee2500-7092-4156-998b-46952a2ba2d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:34 crc kubenswrapper[5050]: I0131 05:39:34.991306 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdee2500-7092-4156-998b-46952a2ba2d7-kube-api-access-4ldcr" (OuterVolumeSpecName: "kube-api-access-4ldcr") pod "bdee2500-7092-4156-998b-46952a2ba2d7" (UID: "bdee2500-7092-4156-998b-46952a2ba2d7"). InnerVolumeSpecName "kube-api-access-4ldcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.031834 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.086822 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt9bt\" (UniqueName: \"kubernetes.io/projected/d403f46d-7461-4c02-8788-f0d4fc1039eb-kube-api-access-nt9bt\") pod \"d403f46d-7461-4c02-8788-f0d4fc1039eb\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.087071 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d403f46d-7461-4c02-8788-f0d4fc1039eb-operator-scripts\") pod \"d403f46d-7461-4c02-8788-f0d4fc1039eb\" (UID: \"d403f46d-7461-4c02-8788-f0d4fc1039eb\") " Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.087533 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d403f46d-7461-4c02-8788-f0d4fc1039eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d403f46d-7461-4c02-8788-f0d4fc1039eb" (UID: "d403f46d-7461-4c02-8788-f0d4fc1039eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.087656 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ldcr\" (UniqueName: \"kubernetes.io/projected/bdee2500-7092-4156-998b-46952a2ba2d7-kube-api-access-4ldcr\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.087682 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdee2500-7092-4156-998b-46952a2ba2d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.088735 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.092740 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d403f46d-7461-4c02-8788-f0d4fc1039eb-kube-api-access-nt9bt" (OuterVolumeSpecName: "kube-api-access-nt9bt") pod "d403f46d-7461-4c02-8788-f0d4fc1039eb" (UID: "d403f46d-7461-4c02-8788-f0d4fc1039eb"). InnerVolumeSpecName "kube-api-access-nt9bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.188733 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzvdh\" (UniqueName: \"kubernetes.io/projected/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-kube-api-access-fzvdh\") pod \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.188934 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-operator-scripts\") pod \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\" (UID: \"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7\") " Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.189625 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d403f46d-7461-4c02-8788-f0d4fc1039eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.189659 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt9bt\" (UniqueName: \"kubernetes.io/projected/d403f46d-7461-4c02-8788-f0d4fc1039eb-kube-api-access-nt9bt\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.189778 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "355e9317-6a93-47e5-83c5-1c5eb6a4d9a7" (UID: "355e9317-6a93-47e5-83c5-1c5eb6a4d9a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.191612 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-kube-api-access-fzvdh" (OuterVolumeSpecName: "kube-api-access-fzvdh") pod "355e9317-6a93-47e5-83c5-1c5eb6a4d9a7" (UID: "355e9317-6a93-47e5-83c5-1c5eb6a4d9a7"). InnerVolumeSpecName "kube-api-access-fzvdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.291087 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzvdh\" (UniqueName: \"kubernetes.io/projected/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-kube-api-access-fzvdh\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.291132 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.563107 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pnbwj" event={"ID":"355e9317-6a93-47e5-83c5-1c5eb6a4d9a7","Type":"ContainerDied","Data":"bbbe22901dbea2d74499e15309f199b9eb83bbcc2738405ef0bbbfada3450c4c"} Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.563139 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pnbwj" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.563165 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbbe22901dbea2d74499e15309f199b9eb83bbcc2738405ef0bbbfada3450c4c" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.564750 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-45f4-account-create-update-dnhlv" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.564749 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-45f4-account-create-update-dnhlv" event={"ID":"d403f46d-7461-4c02-8788-f0d4fc1039eb","Type":"ContainerDied","Data":"0d2f83996fe1d9f9348a3014477a8e1eb6f408eabc86f28454971cad4c3ec5bc"} Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.564805 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2f83996fe1d9f9348a3014477a8e1eb6f408eabc86f28454971cad4c3ec5bc" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.566774 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b6a7-account-create-update-xrks7" event={"ID":"bdee2500-7092-4156-998b-46952a2ba2d7","Type":"ContainerDied","Data":"6cba66adb71686dd0397ad8ff94aaff995a83e88005fa82e42b9a23ac983eb6d"} Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.566820 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cba66adb71686dd0397ad8ff94aaff995a83e88005fa82e42b9a23ac983eb6d" Jan 31 05:39:35 crc kubenswrapper[5050]: I0131 05:39:35.566853 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b6a7-account-create-update-xrks7" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.868144 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-srmqm"] Jan 31 05:39:36 crc kubenswrapper[5050]: E0131 05:39:36.868587 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9664617a-0182-491b-b8e4-dc8f49991888" containerName="mariadb-database-create" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.868608 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9664617a-0182-491b-b8e4-dc8f49991888" containerName="mariadb-database-create" Jan 31 05:39:36 crc kubenswrapper[5050]: E0131 05:39:36.868656 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdee2500-7092-4156-998b-46952a2ba2d7" containerName="mariadb-account-create-update" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.868669 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdee2500-7092-4156-998b-46952a2ba2d7" containerName="mariadb-account-create-update" Jan 31 05:39:36 crc kubenswrapper[5050]: E0131 05:39:36.868695 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d403f46d-7461-4c02-8788-f0d4fc1039eb" containerName="mariadb-account-create-update" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.868707 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d403f46d-7461-4c02-8788-f0d4fc1039eb" containerName="mariadb-account-create-update" Jan 31 05:39:36 crc kubenswrapper[5050]: E0131 05:39:36.868772 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355e9317-6a93-47e5-83c5-1c5eb6a4d9a7" containerName="mariadb-database-create" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.868788 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="355e9317-6a93-47e5-83c5-1c5eb6a4d9a7" containerName="mariadb-database-create" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.869231 5050 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d403f46d-7461-4c02-8788-f0d4fc1039eb" containerName="mariadb-account-create-update" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.869255 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="355e9317-6a93-47e5-83c5-1c5eb6a4d9a7" containerName="mariadb-database-create" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.869270 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="9664617a-0182-491b-b8e4-dc8f49991888" containerName="mariadb-database-create" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.869284 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdee2500-7092-4156-998b-46952a2ba2d7" containerName="mariadb-account-create-update" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.870090 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-srmqm" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.875533 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-srmqm"] Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.921536 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49ce1909-4188-4297-ba99-660320bdef11-operator-scripts\") pod \"glance-db-create-srmqm\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " pod="openstack/glance-db-create-srmqm" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.922692 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwf7\" (UniqueName: \"kubernetes.io/projected/49ce1909-4188-4297-ba99-660320bdef11-kube-api-access-pjwf7\") pod \"glance-db-create-srmqm\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " pod="openstack/glance-db-create-srmqm" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.994997 
5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1bdf-account-create-update-tqmmv"] Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.996447 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:36 crc kubenswrapper[5050]: I0131 05:39:36.999385 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.006321 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1bdf-account-create-update-tqmmv"] Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.024970 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwf7\" (UniqueName: \"kubernetes.io/projected/49ce1909-4188-4297-ba99-660320bdef11-kube-api-access-pjwf7\") pod \"glance-db-create-srmqm\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " pod="openstack/glance-db-create-srmqm" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.025329 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drq74\" (UniqueName: \"kubernetes.io/projected/921e218b-b6a2-47ff-99e0-1b5199015acf-kube-api-access-drq74\") pod \"glance-1bdf-account-create-update-tqmmv\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.025560 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/921e218b-b6a2-47ff-99e0-1b5199015acf-operator-scripts\") pod \"glance-1bdf-account-create-update-tqmmv\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.025765 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49ce1909-4188-4297-ba99-660320bdef11-operator-scripts\") pod \"glance-db-create-srmqm\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " pod="openstack/glance-db-create-srmqm" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.026704 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49ce1909-4188-4297-ba99-660320bdef11-operator-scripts\") pod \"glance-db-create-srmqm\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " pod="openstack/glance-db-create-srmqm" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.042586 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwf7\" (UniqueName: \"kubernetes.io/projected/49ce1909-4188-4297-ba99-660320bdef11-kube-api-access-pjwf7\") pod \"glance-db-create-srmqm\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " pod="openstack/glance-db-create-srmqm" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.127086 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drq74\" (UniqueName: \"kubernetes.io/projected/921e218b-b6a2-47ff-99e0-1b5199015acf-kube-api-access-drq74\") pod \"glance-1bdf-account-create-update-tqmmv\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.127145 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/921e218b-b6a2-47ff-99e0-1b5199015acf-operator-scripts\") pod \"glance-1bdf-account-create-update-tqmmv\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.128029 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/921e218b-b6a2-47ff-99e0-1b5199015acf-operator-scripts\") pod \"glance-1bdf-account-create-update-tqmmv\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.143558 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drq74\" (UniqueName: \"kubernetes.io/projected/921e218b-b6a2-47ff-99e0-1b5199015acf-kube-api-access-drq74\") pod \"glance-1bdf-account-create-update-tqmmv\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.222632 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-srmqm" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.316315 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.796675 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-srmqm"] Jan 31 05:39:37 crc kubenswrapper[5050]: W0131 05:39:37.797054 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ce1909_4188_4297_ba99_660320bdef11.slice/crio-08b8c120ad7341dd95503ab111452657dde418c794d98bd47262ca549affcaf3 WatchSource:0}: Error finding container 08b8c120ad7341dd95503ab111452657dde418c794d98bd47262ca549affcaf3: Status 404 returned error can't find the container with id 08b8c120ad7341dd95503ab111452657dde418c794d98bd47262ca549affcaf3 Jan 31 05:39:37 crc kubenswrapper[5050]: I0131 05:39:37.863048 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1bdf-account-create-update-tqmmv"] Jan 31 05:39:37 crc kubenswrapper[5050]: W0131 05:39:37.871437 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod921e218b_b6a2_47ff_99e0_1b5199015acf.slice/crio-ffc57be78ce29330a18ec44be8b07d0c00ca3f14903e7e5752f0ecbda78db37b WatchSource:0}: Error finding container ffc57be78ce29330a18ec44be8b07d0c00ca3f14903e7e5752f0ecbda78db37b: Status 404 returned error can't find the container with id ffc57be78ce29330a18ec44be8b07d0c00ca3f14903e7e5752f0ecbda78db37b Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.588420 5050 generic.go:334] "Generic (PLEG): container finished" podID="49ce1909-4188-4297-ba99-660320bdef11" containerID="1f4eb9288d063fdd62051f6ce09637538957b8ad83684396226abed53ed44020" exitCode=0 Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.588510 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-srmqm" 
event={"ID":"49ce1909-4188-4297-ba99-660320bdef11","Type":"ContainerDied","Data":"1f4eb9288d063fdd62051f6ce09637538957b8ad83684396226abed53ed44020"} Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.588766 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-srmqm" event={"ID":"49ce1909-4188-4297-ba99-660320bdef11","Type":"ContainerStarted","Data":"08b8c120ad7341dd95503ab111452657dde418c794d98bd47262ca549affcaf3"} Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.590316 5050 generic.go:334] "Generic (PLEG): container finished" podID="921e218b-b6a2-47ff-99e0-1b5199015acf" containerID="7c830414db5d5ba00380225862a45291d788bc737ceafb7b9eab8d09fa1f5def" exitCode=0 Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.590351 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1bdf-account-create-update-tqmmv" event={"ID":"921e218b-b6a2-47ff-99e0-1b5199015acf","Type":"ContainerDied","Data":"7c830414db5d5ba00380225862a45291d788bc737ceafb7b9eab8d09fa1f5def"} Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.590373 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1bdf-account-create-update-tqmmv" event={"ID":"921e218b-b6a2-47ff-99e0-1b5199015acf","Type":"ContainerStarted","Data":"ffc57be78ce29330a18ec44be8b07d0c00ca3f14903e7e5752f0ecbda78db37b"} Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.751735 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tq9nt"] Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.752996 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.755414 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.760175 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tq9nt"] Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.760947 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8wn\" (UniqueName: \"kubernetes.io/projected/a806bbfa-805b-4d52-b880-f5cb1d7cea46-kube-api-access-4n8wn\") pod \"root-account-create-update-tq9nt\" (UID: \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.761122 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a806bbfa-805b-4d52-b880-f5cb1d7cea46-operator-scripts\") pod \"root-account-create-update-tq9nt\" (UID: \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.862310 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n8wn\" (UniqueName: \"kubernetes.io/projected/a806bbfa-805b-4d52-b880-f5cb1d7cea46-kube-api-access-4n8wn\") pod \"root-account-create-update-tq9nt\" (UID: \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.862706 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a806bbfa-805b-4d52-b880-f5cb1d7cea46-operator-scripts\") pod \"root-account-create-update-tq9nt\" (UID: 
\"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.864390 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a806bbfa-805b-4d52-b880-f5cb1d7cea46-operator-scripts\") pod \"root-account-create-update-tq9nt\" (UID: \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:38 crc kubenswrapper[5050]: I0131 05:39:38.884467 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n8wn\" (UniqueName: \"kubernetes.io/projected/a806bbfa-805b-4d52-b880-f5cb1d7cea46-kube-api-access-4n8wn\") pod \"root-account-create-update-tq9nt\" (UID: \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:39 crc kubenswrapper[5050]: I0131 05:39:39.086826 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:39 crc kubenswrapper[5050]: I0131 05:39:39.616383 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tq9nt"] Jan 31 05:39:39 crc kubenswrapper[5050]: W0131 05:39:39.620321 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda806bbfa_805b_4d52_b880_f5cb1d7cea46.slice/crio-71a4bc61f533e5319d28ee6e2b6dd76edf19261d3d704b12150a2882a23a6259 WatchSource:0}: Error finding container 71a4bc61f533e5319d28ee6e2b6dd76edf19261d3d704b12150a2882a23a6259: Status 404 returned error can't find the container with id 71a4bc61f533e5319d28ee6e2b6dd76edf19261d3d704b12150a2882a23a6259 Jan 31 05:39:39 crc kubenswrapper[5050]: I0131 05:39:39.913058 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-srmqm" Jan 31 05:39:39 crc kubenswrapper[5050]: I0131 05:39:39.946288 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.084606 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49ce1909-4188-4297-ba99-660320bdef11-operator-scripts\") pod \"49ce1909-4188-4297-ba99-660320bdef11\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.084855 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drq74\" (UniqueName: \"kubernetes.io/projected/921e218b-b6a2-47ff-99e0-1b5199015acf-kube-api-access-drq74\") pod \"921e218b-b6a2-47ff-99e0-1b5199015acf\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.084995 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjwf7\" (UniqueName: \"kubernetes.io/projected/49ce1909-4188-4297-ba99-660320bdef11-kube-api-access-pjwf7\") pod \"49ce1909-4188-4297-ba99-660320bdef11\" (UID: \"49ce1909-4188-4297-ba99-660320bdef11\") " Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.085587 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/921e218b-b6a2-47ff-99e0-1b5199015acf-operator-scripts\") pod \"921e218b-b6a2-47ff-99e0-1b5199015acf\" (UID: \"921e218b-b6a2-47ff-99e0-1b5199015acf\") " Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.087055 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921e218b-b6a2-47ff-99e0-1b5199015acf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"921e218b-b6a2-47ff-99e0-1b5199015acf" (UID: "921e218b-b6a2-47ff-99e0-1b5199015acf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.087253 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49ce1909-4188-4297-ba99-660320bdef11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49ce1909-4188-4297-ba99-660320bdef11" (UID: "49ce1909-4188-4297-ba99-660320bdef11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.092989 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921e218b-b6a2-47ff-99e0-1b5199015acf-kube-api-access-drq74" (OuterVolumeSpecName: "kube-api-access-drq74") pod "921e218b-b6a2-47ff-99e0-1b5199015acf" (UID: "921e218b-b6a2-47ff-99e0-1b5199015acf"). InnerVolumeSpecName "kube-api-access-drq74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.093271 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ce1909-4188-4297-ba99-660320bdef11-kube-api-access-pjwf7" (OuterVolumeSpecName: "kube-api-access-pjwf7") pod "49ce1909-4188-4297-ba99-660320bdef11" (UID: "49ce1909-4188-4297-ba99-660320bdef11"). InnerVolumeSpecName "kube-api-access-pjwf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.189455 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49ce1909-4188-4297-ba99-660320bdef11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.189508 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drq74\" (UniqueName: \"kubernetes.io/projected/921e218b-b6a2-47ff-99e0-1b5199015acf-kube-api-access-drq74\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.189523 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjwf7\" (UniqueName: \"kubernetes.io/projected/49ce1909-4188-4297-ba99-660320bdef11-kube-api-access-pjwf7\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.189536 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/921e218b-b6a2-47ff-99e0-1b5199015acf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.614864 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-srmqm" event={"ID":"49ce1909-4188-4297-ba99-660320bdef11","Type":"ContainerDied","Data":"08b8c120ad7341dd95503ab111452657dde418c794d98bd47262ca549affcaf3"} Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.614911 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b8c120ad7341dd95503ab111452657dde418c794d98bd47262ca549affcaf3" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.615019 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-srmqm" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.617635 5050 generic.go:334] "Generic (PLEG): container finished" podID="a806bbfa-805b-4d52-b880-f5cb1d7cea46" containerID="8d827da5146bb4251f1e46f43bb0de8eb8a0e96d63a8107cd6ce87e008100091" exitCode=0 Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.617681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tq9nt" event={"ID":"a806bbfa-805b-4d52-b880-f5cb1d7cea46","Type":"ContainerDied","Data":"8d827da5146bb4251f1e46f43bb0de8eb8a0e96d63a8107cd6ce87e008100091"} Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.617741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tq9nt" event={"ID":"a806bbfa-805b-4d52-b880-f5cb1d7cea46","Type":"ContainerStarted","Data":"71a4bc61f533e5319d28ee6e2b6dd76edf19261d3d704b12150a2882a23a6259"} Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.620255 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1bdf-account-create-update-tqmmv" event={"ID":"921e218b-b6a2-47ff-99e0-1b5199015acf","Type":"ContainerDied","Data":"ffc57be78ce29330a18ec44be8b07d0c00ca3f14903e7e5752f0ecbda78db37b"} Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.620281 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc57be78ce29330a18ec44be8b07d0c00ca3f14903e7e5752f0ecbda78db37b" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.620330 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1bdf-account-create-update-tqmmv" Jan 31 05:39:40 crc kubenswrapper[5050]: I0131 05:39:40.662624 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 05:39:41 crc kubenswrapper[5050]: I0131 05:39:41.938044 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.029369 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a806bbfa-805b-4d52-b880-f5cb1d7cea46-operator-scripts\") pod \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\" (UID: \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.029729 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n8wn\" (UniqueName: \"kubernetes.io/projected/a806bbfa-805b-4d52-b880-f5cb1d7cea46-kube-api-access-4n8wn\") pod \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\" (UID: \"a806bbfa-805b-4d52-b880-f5cb1d7cea46\") " Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.030128 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a806bbfa-805b-4d52-b880-f5cb1d7cea46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a806bbfa-805b-4d52-b880-f5cb1d7cea46" (UID: "a806bbfa-805b-4d52-b880-f5cb1d7cea46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.036463 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a806bbfa-805b-4d52-b880-f5cb1d7cea46-kube-api-access-4n8wn" (OuterVolumeSpecName: "kube-api-access-4n8wn") pod "a806bbfa-805b-4d52-b880-f5cb1d7cea46" (UID: "a806bbfa-805b-4d52-b880-f5cb1d7cea46"). InnerVolumeSpecName "kube-api-access-4n8wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.127979 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-b2zd6"] Jan 31 05:39:42 crc kubenswrapper[5050]: E0131 05:39:42.128499 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ce1909-4188-4297-ba99-660320bdef11" containerName="mariadb-database-create" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.128529 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ce1909-4188-4297-ba99-660320bdef11" containerName="mariadb-database-create" Jan 31 05:39:42 crc kubenswrapper[5050]: E0131 05:39:42.128577 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921e218b-b6a2-47ff-99e0-1b5199015acf" containerName="mariadb-account-create-update" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.128590 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="921e218b-b6a2-47ff-99e0-1b5199015acf" containerName="mariadb-account-create-update" Jan 31 05:39:42 crc kubenswrapper[5050]: E0131 05:39:42.128622 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a806bbfa-805b-4d52-b880-f5cb1d7cea46" containerName="mariadb-account-create-update" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.128636 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a806bbfa-805b-4d52-b880-f5cb1d7cea46" containerName="mariadb-account-create-update" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.128906 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a806bbfa-805b-4d52-b880-f5cb1d7cea46" containerName="mariadb-account-create-update" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.128937 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="921e218b-b6a2-47ff-99e0-1b5199015acf" containerName="mariadb-account-create-update" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.129009 5050 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="49ce1909-4188-4297-ba99-660320bdef11" containerName="mariadb-database-create" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.129781 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.131305 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a806bbfa-805b-4d52-b880-f5cb1d7cea46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.131350 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n8wn\" (UniqueName: \"kubernetes.io/projected/a806bbfa-805b-4d52-b880-f5cb1d7cea46-kube-api-access-4n8wn\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.134595 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.134901 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hmzgk" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.140069 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b2zd6"] Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.232454 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-db-sync-config-data\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.232556 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-config-data\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.232590 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b2s\" (UniqueName: \"kubernetes.io/projected/e67e4334-32bb-4e4f-9dad-8209b4e86495-kube-api-access-t6b2s\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.232616 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-combined-ca-bundle\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.334223 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-config-data\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.334316 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6b2s\" (UniqueName: \"kubernetes.io/projected/e67e4334-32bb-4e4f-9dad-8209b4e86495-kube-api-access-t6b2s\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.334349 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-combined-ca-bundle\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.334462 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-db-sync-config-data\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.338643 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-db-sync-config-data\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.338879 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-config-data\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.341035 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-combined-ca-bundle\") pod \"glance-db-sync-b2zd6\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.350834 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6b2s\" (UniqueName: \"kubernetes.io/projected/e67e4334-32bb-4e4f-9dad-8209b4e86495-kube-api-access-t6b2s\") pod \"glance-db-sync-b2zd6\" (UID: 
\"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.453762 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b2zd6" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.641299 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tq9nt" event={"ID":"a806bbfa-805b-4d52-b880-f5cb1d7cea46","Type":"ContainerDied","Data":"71a4bc61f533e5319d28ee6e2b6dd76edf19261d3d704b12150a2882a23a6259"} Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.641708 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a4bc61f533e5319d28ee6e2b6dd76edf19261d3d704b12150a2882a23a6259" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.641329 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tq9nt" Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.708884 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-grlfx" podUID="5eca93ff-3985-4e89-9254-a5d2a94793d6" containerName="ovn-controller" probeResult="failure" output=< Jan 31 05:39:42 crc kubenswrapper[5050]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 05:39:42 crc kubenswrapper[5050]: > Jan 31 05:39:42 crc kubenswrapper[5050]: I0131 05:39:42.806964 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b2zd6"] Jan 31 05:39:43 crc kubenswrapper[5050]: I0131 05:39:43.650249 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zd6" event={"ID":"e67e4334-32bb-4e4f-9dad-8209b4e86495","Type":"ContainerStarted","Data":"eebd4bd22e196726abec9922b92d5ab5dcb6a76dbe029917c672c8678ec46a13"} Jan 31 05:39:45 crc kubenswrapper[5050]: I0131 05:39:45.126881 5050 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-tq9nt"] Jan 31 05:39:45 crc kubenswrapper[5050]: I0131 05:39:45.132890 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tq9nt"] Jan 31 05:39:45 crc kubenswrapper[5050]: I0131 05:39:45.667042 5050 generic.go:334] "Generic (PLEG): container finished" podID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerID="42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d" exitCode=0 Jan 31 05:39:45 crc kubenswrapper[5050]: I0131 05:39:45.667087 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3fa70dc-40c9-4b8a-8239-d785f140d5d2","Type":"ContainerDied","Data":"42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d"} Jan 31 05:39:45 crc kubenswrapper[5050]: I0131 05:39:45.768633 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a806bbfa-805b-4d52-b880-f5cb1d7cea46" path="/var/lib/kubelet/pods/a806bbfa-805b-4d52-b880-f5cb1d7cea46/volumes" Jan 31 05:39:46 crc kubenswrapper[5050]: I0131 05:39:46.679071 5050 generic.go:334] "Generic (PLEG): container finished" podID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerID="908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73" exitCode=0 Jan 31 05:39:46 crc kubenswrapper[5050]: I0131 05:39:46.679156 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"faec33cd-ecd1-4244-abb0-c5a27441abd2","Type":"ContainerDied","Data":"908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73"} Jan 31 05:39:46 crc kubenswrapper[5050]: I0131 05:39:46.685280 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3fa70dc-40c9-4b8a-8239-d785f140d5d2","Type":"ContainerStarted","Data":"13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1"} Jan 31 05:39:46 crc kubenswrapper[5050]: I0131 05:39:46.685636 5050 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 05:39:46 crc kubenswrapper[5050]: I0131 05:39:46.739830 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.022325621 podStartE2EDuration="59.739807819s" podCreationTimestamp="2026-01-31 05:38:47 +0000 UTC" firstStartedPulling="2026-01-31 05:39:02.223934159 +0000 UTC m=+1067.273095765" lastFinishedPulling="2026-01-31 05:39:10.941416367 +0000 UTC m=+1075.990577963" observedRunningTime="2026-01-31 05:39:46.727135652 +0000 UTC m=+1111.776297268" watchObservedRunningTime="2026-01-31 05:39:46.739807819 +0000 UTC m=+1111.788969415" Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.697673 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"faec33cd-ecd1-4244-abb0-c5a27441abd2","Type":"ContainerStarted","Data":"c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702"} Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.698294 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.712778 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-grlfx" podUID="5eca93ff-3985-4e89-9254-a5d2a94793d6" containerName="ovn-controller" probeResult="failure" output=< Jan 31 05:39:47 crc kubenswrapper[5050]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 05:39:47 crc kubenswrapper[5050]: > Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.728716 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.090466581 podStartE2EDuration="1m0.728653058s" podCreationTimestamp="2026-01-31 05:38:47 +0000 UTC" firstStartedPulling="2026-01-31 05:39:01.983369905 +0000 UTC m=+1067.032531501" 
lastFinishedPulling="2026-01-31 05:39:10.621556372 +0000 UTC m=+1075.670717978" observedRunningTime="2026-01-31 05:39:47.727800296 +0000 UTC m=+1112.776961902" watchObservedRunningTime="2026-01-31 05:39:47.728653058 +0000 UTC m=+1112.777814654" Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.753737 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.767339 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p2rnn" Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.980661 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-grlfx-config-dh4cs"] Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.981580 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:47 crc kubenswrapper[5050]: I0131 05:39:47.984285 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.002033 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grlfx-config-dh4cs"] Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.141235 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.141305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-additional-scripts\") pod 
\"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.141372 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb26r\" (UniqueName: \"kubernetes.io/projected/235cefe7-256c-4e52-a52d-be9fefef4b4b-kube-api-access-gb26r\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.141467 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run-ovn\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.141512 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-log-ovn\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.141570 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-scripts\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243144 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-additional-scripts\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243228 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb26r\" (UniqueName: \"kubernetes.io/projected/235cefe7-256c-4e52-a52d-be9fefef4b4b-kube-api-access-gb26r\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243318 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run-ovn\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243371 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-log-ovn\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243397 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-scripts\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243641 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243688 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-log-ovn\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.243641 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run-ovn\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.244236 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-additional-scripts\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.245349 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-scripts\") 
pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.294705 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb26r\" (UniqueName: \"kubernetes.io/projected/235cefe7-256c-4e52-a52d-be9fefef4b4b-kube-api-access-gb26r\") pod \"ovn-controller-grlfx-config-dh4cs\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:48 crc kubenswrapper[5050]: I0131 05:39:48.295731 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.155633 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lw5z6"] Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.156920 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.160972 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.193045 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lw5z6"] Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.277739 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-operator-scripts\") pod \"root-account-create-update-lw5z6\" (UID: \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.277855 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvkr\" (UniqueName: \"kubernetes.io/projected/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-kube-api-access-cgvkr\") pod \"root-account-create-update-lw5z6\" (UID: \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.379014 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgvkr\" (UniqueName: \"kubernetes.io/projected/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-kube-api-access-cgvkr\") pod \"root-account-create-update-lw5z6\" (UID: \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.379121 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-operator-scripts\") pod \"root-account-create-update-lw5z6\" (UID: 
\"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.380053 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-operator-scripts\") pod \"root-account-create-update-lw5z6\" (UID: \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.400271 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgvkr\" (UniqueName: \"kubernetes.io/projected/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-kube-api-access-cgvkr\") pod \"root-account-create-update-lw5z6\" (UID: \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:50 crc kubenswrapper[5050]: I0131 05:39:50.501577 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:52 crc kubenswrapper[5050]: I0131 05:39:52.697498 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-grlfx" podUID="5eca93ff-3985-4e89-9254-a5d2a94793d6" containerName="ovn-controller" probeResult="failure" output=< Jan 31 05:39:52 crc kubenswrapper[5050]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 05:39:52 crc kubenswrapper[5050]: > Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.273009 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lw5z6"] Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.330518 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grlfx-config-dh4cs"] Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.790288 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx-config-dh4cs" event={"ID":"235cefe7-256c-4e52-a52d-be9fefef4b4b","Type":"ContainerStarted","Data":"46fbaf7c19b33f38f91ffdd547e097892b48ff5b4c4b0036b2be3104368a2239"} Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.790631 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx-config-dh4cs" event={"ID":"235cefe7-256c-4e52-a52d-be9fefef4b4b","Type":"ContainerStarted","Data":"eece0cbb5043f33726f26c05b9e4eea875dab88aa2ff4065d2a2ae197df54864"} Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.793540 5050 generic.go:334] "Generic (PLEG): container finished" podID="ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7" containerID="b11a0ea164b5e09c0832f6f93d56876a6b8e8e8ca063c6facecb91057049549e" exitCode=0 Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.793631 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lw5z6" 
event={"ID":"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7","Type":"ContainerDied","Data":"b11a0ea164b5e09c0832f6f93d56876a6b8e8e8ca063c6facecb91057049549e"} Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.793659 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lw5z6" event={"ID":"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7","Type":"ContainerStarted","Data":"c99fe8a05d462a90a08e90e3170752b3464940a5dfb378cd5e2aaec5f37bab18"} Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.796159 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zd6" event={"ID":"e67e4334-32bb-4e4f-9dad-8209b4e86495","Type":"ContainerStarted","Data":"87e5a11dbf69d0073fb361ff2299b3c44b0d0a301c33e1d1ad00f6b9274ea382"} Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.816484 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-grlfx-config-dh4cs" podStartSLOduration=8.816467003 podStartE2EDuration="8.816467003s" podCreationTimestamp="2026-01-31 05:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:39:55.810584971 +0000 UTC m=+1120.859746567" watchObservedRunningTime="2026-01-31 05:39:55.816467003 +0000 UTC m=+1120.865628599" Jan 31 05:39:55 crc kubenswrapper[5050]: I0131 05:39:55.843058 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-b2zd6" podStartSLOduration=1.800812673 podStartE2EDuration="13.843039478s" podCreationTimestamp="2026-01-31 05:39:42 +0000 UTC" firstStartedPulling="2026-01-31 05:39:42.818236036 +0000 UTC m=+1107.867397632" lastFinishedPulling="2026-01-31 05:39:54.860462841 +0000 UTC m=+1119.909624437" observedRunningTime="2026-01-31 05:39:55.83998821 +0000 UTC m=+1120.889149836" watchObservedRunningTime="2026-01-31 05:39:55.843039478 +0000 UTC m=+1120.892201084" Jan 31 05:39:56 crc 
kubenswrapper[5050]: I0131 05:39:56.810535 5050 generic.go:334] "Generic (PLEG): container finished" podID="235cefe7-256c-4e52-a52d-be9fefef4b4b" containerID="46fbaf7c19b33f38f91ffdd547e097892b48ff5b4c4b0036b2be3104368a2239" exitCode=0 Jan 31 05:39:56 crc kubenswrapper[5050]: I0131 05:39:56.810604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx-config-dh4cs" event={"ID":"235cefe7-256c-4e52-a52d-be9fefef4b4b","Type":"ContainerDied","Data":"46fbaf7c19b33f38f91ffdd547e097892b48ff5b4c4b0036b2be3104368a2239"} Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.132049 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.312661 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-operator-scripts\") pod \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\" (UID: \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.312808 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgvkr\" (UniqueName: \"kubernetes.io/projected/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-kube-api-access-cgvkr\") pod \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\" (UID: \"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7\") " Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.314780 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7" (UID: "ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.325344 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-kube-api-access-cgvkr" (OuterVolumeSpecName: "kube-api-access-cgvkr") pod "ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7" (UID: "ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7"). InnerVolumeSpecName "kube-api-access-cgvkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.415486 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.415532 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgvkr\" (UniqueName: \"kubernetes.io/projected/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7-kube-api-access-cgvkr\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.726882 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-grlfx" Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.828146 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lw5z6" event={"ID":"ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7","Type":"ContainerDied","Data":"c99fe8a05d462a90a08e90e3170752b3464940a5dfb378cd5e2aaec5f37bab18"} Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.828183 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99fe8a05d462a90a08e90e3170752b3464940a5dfb378cd5e2aaec5f37bab18" Jan 31 05:39:57 crc kubenswrapper[5050]: I0131 05:39:57.828195 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lw5z6" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.141566 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230232 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run-ovn\") pod \"235cefe7-256c-4e52-a52d-be9fefef4b4b\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230309 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-additional-scripts\") pod \"235cefe7-256c-4e52-a52d-be9fefef4b4b\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230364 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb26r\" (UniqueName: \"kubernetes.io/projected/235cefe7-256c-4e52-a52d-be9fefef4b4b-kube-api-access-gb26r\") pod \"235cefe7-256c-4e52-a52d-be9fefef4b4b\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230402 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run\") pod \"235cefe7-256c-4e52-a52d-be9fefef4b4b\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230468 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-log-ovn\") pod \"235cefe7-256c-4e52-a52d-be9fefef4b4b\" (UID: 
\"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230539 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-scripts\") pod \"235cefe7-256c-4e52-a52d-be9fefef4b4b\" (UID: \"235cefe7-256c-4e52-a52d-be9fefef4b4b\") " Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230537 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run" (OuterVolumeSpecName: "var-run") pod "235cefe7-256c-4e52-a52d-be9fefef4b4b" (UID: "235cefe7-256c-4e52-a52d-be9fefef4b4b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230588 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "235cefe7-256c-4e52-a52d-be9fefef4b4b" (UID: "235cefe7-256c-4e52-a52d-be9fefef4b4b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230850 5050 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230870 5050 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.231219 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "235cefe7-256c-4e52-a52d-be9fefef4b4b" (UID: "235cefe7-256c-4e52-a52d-be9fefef4b4b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.230407 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "235cefe7-256c-4e52-a52d-be9fefef4b4b" (UID: "235cefe7-256c-4e52-a52d-be9fefef4b4b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.231594 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-scripts" (OuterVolumeSpecName: "scripts") pod "235cefe7-256c-4e52-a52d-be9fefef4b4b" (UID: "235cefe7-256c-4e52-a52d-be9fefef4b4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.235981 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235cefe7-256c-4e52-a52d-be9fefef4b4b-kube-api-access-gb26r" (OuterVolumeSpecName: "kube-api-access-gb26r") pod "235cefe7-256c-4e52-a52d-be9fefef4b4b" (UID: "235cefe7-256c-4e52-a52d-be9fefef4b4b"). InnerVolumeSpecName "kube-api-access-gb26r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.333119 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.333164 5050 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/235cefe7-256c-4e52-a52d-be9fefef4b4b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.333183 5050 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/235cefe7-256c-4e52-a52d-be9fefef4b4b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.333202 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb26r\" (UniqueName: \"kubernetes.io/projected/235cefe7-256c-4e52-a52d-be9fefef4b4b-kube-api-access-gb26r\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.432238 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-grlfx-config-dh4cs"] Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.444606 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-grlfx-config-dh4cs"] Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.552279 5050 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-grlfx-config-x47zp"] Jan 31 05:39:58 crc kubenswrapper[5050]: E0131 05:39:58.552584 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235cefe7-256c-4e52-a52d-be9fefef4b4b" containerName="ovn-config" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.552599 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="235cefe7-256c-4e52-a52d-be9fefef4b4b" containerName="ovn-config" Jan 31 05:39:58 crc kubenswrapper[5050]: E0131 05:39:58.552623 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7" containerName="mariadb-account-create-update" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.552630 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7" containerName="mariadb-account-create-update" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.552774 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="235cefe7-256c-4e52-a52d-be9fefef4b4b" containerName="ovn-config" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.552795 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7" containerName="mariadb-account-create-update" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.553277 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.573979 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grlfx-config-x47zp"] Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.625166 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.637196 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-scripts\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.637364 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run-ovn\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.637531 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-additional-scripts\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.637591 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: 
\"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.637679 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-log-ovn\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.637721 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9fz\" (UniqueName: \"kubernetes.io/projected/6e6912f1-f5de-4e40-8dba-4e2d4faee091-kube-api-access-xg9fz\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.741011 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-scripts\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.741190 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run-ovn\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.741265 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-additional-scripts\") pod 
\"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.741289 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.741429 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-log-ovn\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.741468 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9fz\" (UniqueName: \"kubernetes.io/projected/6e6912f1-f5de-4e40-8dba-4e2d4faee091-kube-api-access-xg9fz\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.742116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.742303 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-log-ovn\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: 
\"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.742445 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run-ovn\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.742862 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-scripts\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.743317 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-additional-scripts\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.764144 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9fz\" (UniqueName: \"kubernetes.io/projected/6e6912f1-f5de-4e40-8dba-4e2d4faee091-kube-api-access-xg9fz\") pod \"ovn-controller-grlfx-config-x47zp\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.837653 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eece0cbb5043f33726f26c05b9e4eea875dab88aa2ff4065d2a2ae197df54864" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.837960 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-grlfx-config-dh4cs" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.867762 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.910208 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.977026 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-r5d7p"] Jan 31 05:39:58 crc kubenswrapper[5050]: I0131 05:39:58.978306 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.031827 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-r5d7p"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.049015 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c72l\" (UniqueName: \"kubernetes.io/projected/f25be051-f6a0-486d-a204-59b3f33af8c8-kube-api-access-9c72l\") pod \"cinder-db-create-r5d7p\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.049058 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25be051-f6a0-486d-a204-59b3f33af8c8-operator-scripts\") pod \"cinder-db-create-r5d7p\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.097691 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jpmz7"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.098820 5050 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.114286 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b1ec-account-create-update-p5wzj"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.115560 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.118898 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jpmz7"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.122071 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.135839 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b1ec-account-create-update-p5wzj"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.151089 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c72l\" (UniqueName: \"kubernetes.io/projected/f25be051-f6a0-486d-a204-59b3f33af8c8-kube-api-access-9c72l\") pod \"cinder-db-create-r5d7p\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.151150 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25be051-f6a0-486d-a204-59b3f33af8c8-operator-scripts\") pod \"cinder-db-create-r5d7p\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.151191 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbc6186-a3de-418c-a213-3064164fc5bc-operator-scripts\") pod 
\"cinder-b1ec-account-create-update-p5wzj\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.151269 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rgp\" (UniqueName: \"kubernetes.io/projected/5dbc6186-a3de-418c-a213-3064164fc5bc-kube-api-access-c5rgp\") pod \"cinder-b1ec-account-create-update-p5wzj\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.151304 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5l7\" (UniqueName: \"kubernetes.io/projected/e81c149b-a523-42c5-8d6b-2eefde46201a-kube-api-access-4b5l7\") pod \"barbican-db-create-jpmz7\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.151371 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e81c149b-a523-42c5-8d6b-2eefde46201a-operator-scripts\") pod \"barbican-db-create-jpmz7\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.152338 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25be051-f6a0-486d-a204-59b3f33af8c8-operator-scripts\") pod \"cinder-db-create-r5d7p\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.189436 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c72l\" (UniqueName: 
\"kubernetes.io/projected/f25be051-f6a0-486d-a204-59b3f33af8c8-kube-api-access-9c72l\") pod \"cinder-db-create-r5d7p\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.230520 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8fca-account-create-update-zgmfr"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.231668 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.236909 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8fca-account-create-update-zgmfr"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.238320 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.252789 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e81c149b-a523-42c5-8d6b-2eefde46201a-operator-scripts\") pod \"barbican-db-create-jpmz7\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.252891 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbc6186-a3de-418c-a213-3064164fc5bc-operator-scripts\") pod \"cinder-b1ec-account-create-update-p5wzj\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.252976 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rgp\" (UniqueName: \"kubernetes.io/projected/5dbc6186-a3de-418c-a213-3064164fc5bc-kube-api-access-c5rgp\") pod 
\"cinder-b1ec-account-create-update-p5wzj\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.252996 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b5l7\" (UniqueName: \"kubernetes.io/projected/e81c149b-a523-42c5-8d6b-2eefde46201a-kube-api-access-4b5l7\") pod \"barbican-db-create-jpmz7\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.258166 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbc6186-a3de-418c-a213-3064164fc5bc-operator-scripts\") pod \"cinder-b1ec-account-create-update-p5wzj\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.258321 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e81c149b-a523-42c5-8d6b-2eefde46201a-operator-scripts\") pod \"barbican-db-create-jpmz7\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.284493 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b5l7\" (UniqueName: \"kubernetes.io/projected/e81c149b-a523-42c5-8d6b-2eefde46201a-kube-api-access-4b5l7\") pod \"barbican-db-create-jpmz7\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.293539 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rgp\" (UniqueName: \"kubernetes.io/projected/5dbc6186-a3de-418c-a213-3064164fc5bc-kube-api-access-c5rgp\") pod 
\"cinder-b1ec-account-create-update-p5wzj\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.354652 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2zxjh"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.354651 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea6a094-f9f7-4626-9241-c23f2d2685d7-operator-scripts\") pod \"barbican-8fca-account-create-update-zgmfr\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.354959 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkf2l\" (UniqueName: \"kubernetes.io/projected/0ea6a094-f9f7-4626-9241-c23f2d2685d7-kube-api-access-xkf2l\") pod \"barbican-8fca-account-create-update-zgmfr\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.356260 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.358345 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.358884 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.359780 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.360084 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqp2b" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.366480 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r5d7p" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.371800 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2zxjh"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.415029 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jpmz7" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.463260 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.466740 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea6a094-f9f7-4626-9241-c23f2d2685d7-operator-scripts\") pod \"barbican-8fca-account-create-update-zgmfr\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.466835 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkf2l\" (UniqueName: \"kubernetes.io/projected/0ea6a094-f9f7-4626-9241-c23f2d2685d7-kube-api-access-xkf2l\") pod \"barbican-8fca-account-create-update-zgmfr\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.466883 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxxd\" (UniqueName: \"kubernetes.io/projected/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-kube-api-access-jqxxd\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.466914 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-combined-ca-bundle\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.466961 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-config-data\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.473438 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea6a094-f9f7-4626-9241-c23f2d2685d7-operator-scripts\") pod \"barbican-8fca-account-create-update-zgmfr\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.478995 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lkhcw"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.480939 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.495161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lkhcw"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.504277 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkf2l\" (UniqueName: \"kubernetes.io/projected/0ea6a094-f9f7-4626-9241-c23f2d2685d7-kube-api-access-xkf2l\") pod \"barbican-8fca-account-create-update-zgmfr\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: W0131 05:39:59.523725 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e6912f1_f5de_4e40_8dba_4e2d4faee091.slice/crio-81674409fb142dd84460e4c49e91b6bde43e8b9b5c61581ff4374ee454bbc481 WatchSource:0}: Error finding container 81674409fb142dd84460e4c49e91b6bde43e8b9b5c61581ff4374ee454bbc481: Status 404 returned error can't find the container with id 
81674409fb142dd84460e4c49e91b6bde43e8b9b5c61581ff4374ee454bbc481 Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.528633 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grlfx-config-x47zp"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.558654 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.570771 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trp8\" (UniqueName: \"kubernetes.io/projected/2b522428-69eb-4f45-97c5-dc71f66011d6-kube-api-access-7trp8\") pod \"neutron-db-create-lkhcw\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.570826 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-config-data\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.570874 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b522428-69eb-4f45-97c5-dc71f66011d6-operator-scripts\") pod \"neutron-db-create-lkhcw\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.570974 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqxxd\" (UniqueName: \"kubernetes.io/projected/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-kube-api-access-jqxxd\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " 
pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.571002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-combined-ca-bundle\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.577150 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-combined-ca-bundle\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.578706 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-config-data\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.588003 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1612-account-create-update-2thjx"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.589107 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.594801 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.607521 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqxxd\" (UniqueName: \"kubernetes.io/projected/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-kube-api-access-jqxxd\") pod \"keystone-db-sync-2zxjh\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.611435 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1612-account-create-update-2thjx"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.689099 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.697056 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4grmj\" (UniqueName: \"kubernetes.io/projected/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-kube-api-access-4grmj\") pod \"neutron-1612-account-create-update-2thjx\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.697111 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7trp8\" (UniqueName: \"kubernetes.io/projected/2b522428-69eb-4f45-97c5-dc71f66011d6-kube-api-access-7trp8\") pod \"neutron-db-create-lkhcw\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.697156 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-operator-scripts\") pod \"neutron-1612-account-create-update-2thjx\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.697180 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b522428-69eb-4f45-97c5-dc71f66011d6-operator-scripts\") pod \"neutron-db-create-lkhcw\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.697859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b522428-69eb-4f45-97c5-dc71f66011d6-operator-scripts\") pod \"neutron-db-create-lkhcw\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.726487 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7trp8\" (UniqueName: \"kubernetes.io/projected/2b522428-69eb-4f45-97c5-dc71f66011d6-kube-api-access-7trp8\") pod \"neutron-db-create-lkhcw\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.786309 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235cefe7-256c-4e52-a52d-be9fefef4b4b" path="/var/lib/kubelet/pods/235cefe7-256c-4e52-a52d-be9fefef4b4b/volumes" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.787735 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-r5d7p"] Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.800236 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4grmj\" 
(UniqueName: \"kubernetes.io/projected/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-kube-api-access-4grmj\") pod \"neutron-1612-account-create-update-2thjx\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.800320 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-operator-scripts\") pod \"neutron-1612-account-create-update-2thjx\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.801028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-operator-scripts\") pod \"neutron-1612-account-create-update-2thjx\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.830859 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lkhcw" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.847278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4grmj\" (UniqueName: \"kubernetes.io/projected/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-kube-api-access-4grmj\") pod \"neutron-1612-account-create-update-2thjx\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.862794 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx-config-x47zp" event={"ID":"6e6912f1-f5de-4e40-8dba-4e2d4faee091","Type":"ContainerStarted","Data":"81674409fb142dd84460e4c49e91b6bde43e8b9b5c61581ff4374ee454bbc481"} Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.866755 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-r5d7p" event={"ID":"f25be051-f6a0-486d-a204-59b3f33af8c8","Type":"ContainerStarted","Data":"9ccc7f49a70aae8c3e647d21b6b0f9253849f6268711de5a432780d64265f77c"} Jan 31 05:39:59 crc kubenswrapper[5050]: I0131 05:39:59.917785 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.150068 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jpmz7"] Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.279418 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b1ec-account-create-update-p5wzj"] Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.464734 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2zxjh"] Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.471470 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8fca-account-create-update-zgmfr"] Jan 31 05:40:00 crc kubenswrapper[5050]: W0131 05:40:00.520216 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1b21e3d_6de3_4be1_af37_d2fcf6d5521d.slice/crio-9093baa4677500905a2a4fc3adca883930a0fa31df4dbf9d84461b12140f5028 WatchSource:0}: Error finding container 9093baa4677500905a2a4fc3adca883930a0fa31df4dbf9d84461b12140f5028: Status 404 returned error can't find the container with id 9093baa4677500905a2a4fc3adca883930a0fa31df4dbf9d84461b12140f5028 Jan 31 05:40:00 crc kubenswrapper[5050]: W0131 05:40:00.522771 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea6a094_f9f7_4626_9241_c23f2d2685d7.slice/crio-a1d3fc9216091eded51574bd034ca1d3cb85d996bb9c196ece83cb07b10c312a WatchSource:0}: Error finding container a1d3fc9216091eded51574bd034ca1d3cb85d996bb9c196ece83cb07b10c312a: Status 404 returned error can't find the container with id a1d3fc9216091eded51574bd034ca1d3cb85d996bb9c196ece83cb07b10c312a Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.562797 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lkhcw"] Jan 31 
05:40:00 crc kubenswrapper[5050]: W0131 05:40:00.570780 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b522428_69eb_4f45_97c5_dc71f66011d6.slice/crio-4655c57b5d8a2cd67337fd00dcebb0069c84de4af69ecd118179c199318fa9d6 WatchSource:0}: Error finding container 4655c57b5d8a2cd67337fd00dcebb0069c84de4af69ecd118179c199318fa9d6: Status 404 returned error can't find the container with id 4655c57b5d8a2cd67337fd00dcebb0069c84de4af69ecd118179c199318fa9d6 Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.644362 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1612-account-create-update-2thjx"] Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.875600 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2zxjh" event={"ID":"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d","Type":"ContainerStarted","Data":"9093baa4677500905a2a4fc3adca883930a0fa31df4dbf9d84461b12140f5028"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.877540 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1612-account-create-update-2thjx" event={"ID":"d158e1ca-8b81-42bd-ad5e-69ae4017ad92","Type":"ContainerStarted","Data":"2f9ca96ac33593cb6ba0adfde7996b2145361062c1a2eda834e08003c9ed6009"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.877572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1612-account-create-update-2thjx" event={"ID":"d158e1ca-8b81-42bd-ad5e-69ae4017ad92","Type":"ContainerStarted","Data":"b7f2db5648d18e9cc9020cf96f3599d46e09a8ba7026ec3ba74ff73dd7891e29"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.880078 5050 generic.go:334] "Generic (PLEG): container finished" podID="f25be051-f6a0-486d-a204-59b3f33af8c8" containerID="e85f0727ea097ff70f2df6f83d977da5977c96217cf83547b639b74c1b7fc0ca" exitCode=0 Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.880139 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-r5d7p" event={"ID":"f25be051-f6a0-486d-a204-59b3f33af8c8","Type":"ContainerDied","Data":"e85f0727ea097ff70f2df6f83d977da5977c96217cf83547b639b74c1b7fc0ca"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.882676 5050 generic.go:334] "Generic (PLEG): container finished" podID="5dbc6186-a3de-418c-a213-3064164fc5bc" containerID="0c029cccd8c791b2721eb0632b396ef508c2df08924ce64c3fbf53916cdec762" exitCode=0 Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.882773 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b1ec-account-create-update-p5wzj" event={"ID":"5dbc6186-a3de-418c-a213-3064164fc5bc","Type":"ContainerDied","Data":"0c029cccd8c791b2721eb0632b396ef508c2df08924ce64c3fbf53916cdec762"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.882831 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b1ec-account-create-update-p5wzj" event={"ID":"5dbc6186-a3de-418c-a213-3064164fc5bc","Type":"ContainerStarted","Data":"fd03115fb0bdaa6daaa12cdfb5970a6d9bd53ff643e901f09c4e152be26fb179"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.884450 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lkhcw" event={"ID":"2b522428-69eb-4f45-97c5-dc71f66011d6","Type":"ContainerStarted","Data":"02ebf91af5cb0526a93b15c60b199e280aaa3fcc610a5c5b508788340985885d"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.884480 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lkhcw" event={"ID":"2b522428-69eb-4f45-97c5-dc71f66011d6","Type":"ContainerStarted","Data":"4655c57b5d8a2cd67337fd00dcebb0069c84de4af69ecd118179c199318fa9d6"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.886288 5050 generic.go:334] "Generic (PLEG): container finished" podID="6e6912f1-f5de-4e40-8dba-4e2d4faee091" 
containerID="95b0d44235c9900d92630ad20c3542a014ae2cfb79568d614357eaf25852048e" exitCode=0 Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.886321 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx-config-x47zp" event={"ID":"6e6912f1-f5de-4e40-8dba-4e2d4faee091","Type":"ContainerDied","Data":"95b0d44235c9900d92630ad20c3542a014ae2cfb79568d614357eaf25852048e"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.887742 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8fca-account-create-update-zgmfr" event={"ID":"0ea6a094-f9f7-4626-9241-c23f2d2685d7","Type":"ContainerStarted","Data":"1012deac64cfedf3ca4c8d4b3f5303f269d7bc20e6ca0325c5758dfc0067ac56"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.887769 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8fca-account-create-update-zgmfr" event={"ID":"0ea6a094-f9f7-4626-9241-c23f2d2685d7","Type":"ContainerStarted","Data":"a1d3fc9216091eded51574bd034ca1d3cb85d996bb9c196ece83cb07b10c312a"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.890562 5050 generic.go:334] "Generic (PLEG): container finished" podID="e81c149b-a523-42c5-8d6b-2eefde46201a" containerID="4b1a2791792810a1090871ed4f547300dcd528be44fda0466d851f237839eabc" exitCode=0 Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.890652 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jpmz7" event={"ID":"e81c149b-a523-42c5-8d6b-2eefde46201a","Type":"ContainerDied","Data":"4b1a2791792810a1090871ed4f547300dcd528be44fda0466d851f237839eabc"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.890838 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jpmz7" event={"ID":"e81c149b-a523-42c5-8d6b-2eefde46201a","Type":"ContainerStarted","Data":"6909a4f86c08cd18f563e4e5a1334e42aa57db9ab2dea46dfb9d77778eeca33f"} Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 
05:40:00.901619 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-1612-account-create-update-2thjx" podStartSLOduration=1.90160258 podStartE2EDuration="1.90160258s" podCreationTimestamp="2026-01-31 05:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:00.894889616 +0000 UTC m=+1125.944051212" watchObservedRunningTime="2026-01-31 05:40:00.90160258 +0000 UTC m=+1125.950764176" Jan 31 05:40:00 crc kubenswrapper[5050]: I0131 05:40:00.996127 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lkhcw" podStartSLOduration=1.996109576 podStartE2EDuration="1.996109576s" podCreationTimestamp="2026-01-31 05:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:00.995922271 +0000 UTC m=+1126.045083877" watchObservedRunningTime="2026-01-31 05:40:00.996109576 +0000 UTC m=+1126.045271172" Jan 31 05:40:01 crc kubenswrapper[5050]: I0131 05:40:01.010249 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8fca-account-create-update-zgmfr" podStartSLOduration=2.01022904 podStartE2EDuration="2.01022904s" podCreationTimestamp="2026-01-31 05:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:01.008334872 +0000 UTC m=+1126.057496468" watchObservedRunningTime="2026-01-31 05:40:01.01022904 +0000 UTC m=+1126.059390636" Jan 31 05:40:01 crc kubenswrapper[5050]: I0131 05:40:01.904150 5050 generic.go:334] "Generic (PLEG): container finished" podID="d158e1ca-8b81-42bd-ad5e-69ae4017ad92" containerID="2f9ca96ac33593cb6ba0adfde7996b2145361062c1a2eda834e08003c9ed6009" exitCode=0 Jan 31 05:40:01 crc kubenswrapper[5050]: I0131 05:40:01.904262 
5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1612-account-create-update-2thjx" event={"ID":"d158e1ca-8b81-42bd-ad5e-69ae4017ad92","Type":"ContainerDied","Data":"2f9ca96ac33593cb6ba0adfde7996b2145361062c1a2eda834e08003c9ed6009"} Jan 31 05:40:01 crc kubenswrapper[5050]: I0131 05:40:01.909528 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b522428-69eb-4f45-97c5-dc71f66011d6" containerID="02ebf91af5cb0526a93b15c60b199e280aaa3fcc610a5c5b508788340985885d" exitCode=0 Jan 31 05:40:01 crc kubenswrapper[5050]: I0131 05:40:01.909601 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lkhcw" event={"ID":"2b522428-69eb-4f45-97c5-dc71f66011d6","Type":"ContainerDied","Data":"02ebf91af5cb0526a93b15c60b199e280aaa3fcc610a5c5b508788340985885d"} Jan 31 05:40:01 crc kubenswrapper[5050]: I0131 05:40:01.911622 5050 generic.go:334] "Generic (PLEG): container finished" podID="0ea6a094-f9f7-4626-9241-c23f2d2685d7" containerID="1012deac64cfedf3ca4c8d4b3f5303f269d7bc20e6ca0325c5758dfc0067ac56" exitCode=0 Jan 31 05:40:01 crc kubenswrapper[5050]: I0131 05:40:01.911827 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8fca-account-create-update-zgmfr" event={"ID":"0ea6a094-f9f7-4626-9241-c23f2d2685d7","Type":"ContainerDied","Data":"1012deac64cfedf3ca4c8d4b3f5303f269d7bc20e6ca0325c5758dfc0067ac56"} Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.281805 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jpmz7" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.451244 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e81c149b-a523-42c5-8d6b-2eefde46201a-operator-scripts\") pod \"e81c149b-a523-42c5-8d6b-2eefde46201a\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.452136 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e81c149b-a523-42c5-8d6b-2eefde46201a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e81c149b-a523-42c5-8d6b-2eefde46201a" (UID: "e81c149b-a523-42c5-8d6b-2eefde46201a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.452354 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b5l7\" (UniqueName: \"kubernetes.io/projected/e81c149b-a523-42c5-8d6b-2eefde46201a-kube-api-access-4b5l7\") pod \"e81c149b-a523-42c5-8d6b-2eefde46201a\" (UID: \"e81c149b-a523-42c5-8d6b-2eefde46201a\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.453153 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e81c149b-a523-42c5-8d6b-2eefde46201a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.458512 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.458892 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e81c149b-a523-42c5-8d6b-2eefde46201a-kube-api-access-4b5l7" (OuterVolumeSpecName: "kube-api-access-4b5l7") pod "e81c149b-a523-42c5-8d6b-2eefde46201a" (UID: "e81c149b-a523-42c5-8d6b-2eefde46201a"). InnerVolumeSpecName "kube-api-access-4b5l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.500726 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r5d7p" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.511064 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.553909 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run\") pod \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.553979 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run-ovn\") pod \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554004 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-scripts\") pod \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 
05:40:02.554020 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run" (OuterVolumeSpecName: "var-run") pod "6e6912f1-f5de-4e40-8dba-4e2d4faee091" (UID: "6e6912f1-f5de-4e40-8dba-4e2d4faee091"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554045 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6e6912f1-f5de-4e40-8dba-4e2d4faee091" (UID: "6e6912f1-f5de-4e40-8dba-4e2d4faee091"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554065 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9fz\" (UniqueName: \"kubernetes.io/projected/6e6912f1-f5de-4e40-8dba-4e2d4faee091-kube-api-access-xg9fz\") pod \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554085 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-log-ovn\") pod \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554144 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-additional-scripts\") pod \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\" (UID: \"6e6912f1-f5de-4e40-8dba-4e2d4faee091\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554537 5050 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-4b5l7\" (UniqueName: \"kubernetes.io/projected/e81c149b-a523-42c5-8d6b-2eefde46201a-kube-api-access-4b5l7\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554554 5050 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554564 5050 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.554561 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6e6912f1-f5de-4e40-8dba-4e2d4faee091" (UID: "6e6912f1-f5de-4e40-8dba-4e2d4faee091"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.555087 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6e6912f1-f5de-4e40-8dba-4e2d4faee091" (UID: "6e6912f1-f5de-4e40-8dba-4e2d4faee091"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.555363 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-scripts" (OuterVolumeSpecName: "scripts") pod "6e6912f1-f5de-4e40-8dba-4e2d4faee091" (UID: "6e6912f1-f5de-4e40-8dba-4e2d4faee091"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.558532 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6912f1-f5de-4e40-8dba-4e2d4faee091-kube-api-access-xg9fz" (OuterVolumeSpecName: "kube-api-access-xg9fz") pod "6e6912f1-f5de-4e40-8dba-4e2d4faee091" (UID: "6e6912f1-f5de-4e40-8dba-4e2d4faee091"). InnerVolumeSpecName "kube-api-access-xg9fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.655164 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbc6186-a3de-418c-a213-3064164fc5bc-operator-scripts\") pod \"5dbc6186-a3de-418c-a213-3064164fc5bc\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.655211 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5rgp\" (UniqueName: \"kubernetes.io/projected/5dbc6186-a3de-418c-a213-3064164fc5bc-kube-api-access-c5rgp\") pod \"5dbc6186-a3de-418c-a213-3064164fc5bc\" (UID: \"5dbc6186-a3de-418c-a213-3064164fc5bc\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.655253 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c72l\" (UniqueName: \"kubernetes.io/projected/f25be051-f6a0-486d-a204-59b3f33af8c8-kube-api-access-9c72l\") pod \"f25be051-f6a0-486d-a204-59b3f33af8c8\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.655345 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25be051-f6a0-486d-a204-59b3f33af8c8-operator-scripts\") pod \"f25be051-f6a0-486d-a204-59b3f33af8c8\" (UID: \"f25be051-f6a0-486d-a204-59b3f33af8c8\") " Jan 31 05:40:02 crc 
kubenswrapper[5050]: I0131 05:40:02.655761 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.655784 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9fz\" (UniqueName: \"kubernetes.io/projected/6e6912f1-f5de-4e40-8dba-4e2d4faee091-kube-api-access-xg9fz\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.655797 5050 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e6912f1-f5de-4e40-8dba-4e2d4faee091-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.655809 5050 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6912f1-f5de-4e40-8dba-4e2d4faee091-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.656209 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dbc6186-a3de-418c-a213-3064164fc5bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5dbc6186-a3de-418c-a213-3064164fc5bc" (UID: "5dbc6186-a3de-418c-a213-3064164fc5bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.656264 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25be051-f6a0-486d-a204-59b3f33af8c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f25be051-f6a0-486d-a204-59b3f33af8c8" (UID: "f25be051-f6a0-486d-a204-59b3f33af8c8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.658588 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25be051-f6a0-486d-a204-59b3f33af8c8-kube-api-access-9c72l" (OuterVolumeSpecName: "kube-api-access-9c72l") pod "f25be051-f6a0-486d-a204-59b3f33af8c8" (UID: "f25be051-f6a0-486d-a204-59b3f33af8c8"). InnerVolumeSpecName "kube-api-access-9c72l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.659177 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dbc6186-a3de-418c-a213-3064164fc5bc-kube-api-access-c5rgp" (OuterVolumeSpecName: "kube-api-access-c5rgp") pod "5dbc6186-a3de-418c-a213-3064164fc5bc" (UID: "5dbc6186-a3de-418c-a213-3064164fc5bc"). InnerVolumeSpecName "kube-api-access-c5rgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.757375 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbc6186-a3de-418c-a213-3064164fc5bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.757410 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5rgp\" (UniqueName: \"kubernetes.io/projected/5dbc6186-a3de-418c-a213-3064164fc5bc-kube-api-access-c5rgp\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.757422 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c72l\" (UniqueName: \"kubernetes.io/projected/f25be051-f6a0-486d-a204-59b3f33af8c8-kube-api-access-9c72l\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.757432 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f25be051-f6a0-486d-a204-59b3f33af8c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.919985 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grlfx-config-x47zp" event={"ID":"6e6912f1-f5de-4e40-8dba-4e2d4faee091","Type":"ContainerDied","Data":"81674409fb142dd84460e4c49e91b6bde43e8b9b5c61581ff4374ee454bbc481"} Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.920020 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81674409fb142dd84460e4c49e91b6bde43e8b9b5c61581ff4374ee454bbc481" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.920074 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grlfx-config-x47zp" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.929757 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jpmz7" event={"ID":"e81c149b-a523-42c5-8d6b-2eefde46201a","Type":"ContainerDied","Data":"6909a4f86c08cd18f563e4e5a1334e42aa57db9ab2dea46dfb9d77778eeca33f"} Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.929786 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6909a4f86c08cd18f563e4e5a1334e42aa57db9ab2dea46dfb9d77778eeca33f" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.929830 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jpmz7" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.932053 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-r5d7p" event={"ID":"f25be051-f6a0-486d-a204-59b3f33af8c8","Type":"ContainerDied","Data":"9ccc7f49a70aae8c3e647d21b6b0f9253849f6268711de5a432780d64265f77c"} Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.932116 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccc7f49a70aae8c3e647d21b6b0f9253849f6268711de5a432780d64265f77c" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.932115 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-r5d7p" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.933462 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b1ec-account-create-update-p5wzj" event={"ID":"5dbc6186-a3de-418c-a213-3064164fc5bc","Type":"ContainerDied","Data":"fd03115fb0bdaa6daaa12cdfb5970a6d9bd53ff643e901f09c4e152be26fb179"} Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.933520 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd03115fb0bdaa6daaa12cdfb5970a6d9bd53ff643e901f09c4e152be26fb179" Jan 31 05:40:02 crc kubenswrapper[5050]: I0131 05:40:02.933613 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b1ec-account-create-update-p5wzj" Jan 31 05:40:03 crc kubenswrapper[5050]: I0131 05:40:03.598109 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-grlfx-config-x47zp"] Jan 31 05:40:03 crc kubenswrapper[5050]: I0131 05:40:03.610653 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-grlfx-config-x47zp"] Jan 31 05:40:03 crc kubenswrapper[5050]: I0131 05:40:03.755673 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6912f1-f5de-4e40-8dba-4e2d4faee091" path="/var/lib/kubelet/pods/6e6912f1-f5de-4e40-8dba-4e2d4faee091/volumes" Jan 31 05:40:03 crc kubenswrapper[5050]: I0131 05:40:03.971920 5050 generic.go:334] "Generic (PLEG): container finished" podID="e67e4334-32bb-4e4f-9dad-8209b4e86495" containerID="87e5a11dbf69d0073fb361ff2299b3c44b0d0a301c33e1d1ad00f6b9274ea382" exitCode=0 Jan 31 05:40:03 crc kubenswrapper[5050]: I0131 05:40:03.972115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zd6" event={"ID":"e67e4334-32bb-4e4f-9dad-8209b4e86495","Type":"ContainerDied","Data":"87e5a11dbf69d0073fb361ff2299b3c44b0d0a301c33e1d1ad00f6b9274ea382"} Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.675052 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.707978 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lkhcw" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.736635 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b2zd6" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.747582 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826677 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkf2l\" (UniqueName: \"kubernetes.io/projected/0ea6a094-f9f7-4626-9241-c23f2d2685d7-kube-api-access-xkf2l\") pod \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826737 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-operator-scripts\") pod \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826768 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4grmj\" (UniqueName: \"kubernetes.io/projected/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-kube-api-access-4grmj\") pod \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\" (UID: \"d158e1ca-8b81-42bd-ad5e-69ae4017ad92\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826802 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6b2s\" (UniqueName: \"kubernetes.io/projected/e67e4334-32bb-4e4f-9dad-8209b4e86495-kube-api-access-t6b2s\") pod \"e67e4334-32bb-4e4f-9dad-8209b4e86495\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826824 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-combined-ca-bundle\") pod \"e67e4334-32bb-4e4f-9dad-8209b4e86495\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826856 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea6a094-f9f7-4626-9241-c23f2d2685d7-operator-scripts\") pod \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\" (UID: \"0ea6a094-f9f7-4626-9241-c23f2d2685d7\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826894 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b522428-69eb-4f45-97c5-dc71f66011d6-operator-scripts\") pod \"2b522428-69eb-4f45-97c5-dc71f66011d6\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.826939 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-db-sync-config-data\") pod \"e67e4334-32bb-4e4f-9dad-8209b4e86495\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.827012 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7trp8\" (UniqueName: \"kubernetes.io/projected/2b522428-69eb-4f45-97c5-dc71f66011d6-kube-api-access-7trp8\") pod \"2b522428-69eb-4f45-97c5-dc71f66011d6\" (UID: \"2b522428-69eb-4f45-97c5-dc71f66011d6\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.827067 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-config-data\") pod \"e67e4334-32bb-4e4f-9dad-8209b4e86495\" (UID: \"e67e4334-32bb-4e4f-9dad-8209b4e86495\") " Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.833870 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea6a094-f9f7-4626-9241-c23f2d2685d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"0ea6a094-f9f7-4626-9241-c23f2d2685d7" (UID: "0ea6a094-f9f7-4626-9241-c23f2d2685d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.834197 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b522428-69eb-4f45-97c5-dc71f66011d6-kube-api-access-7trp8" (OuterVolumeSpecName: "kube-api-access-7trp8") pod "2b522428-69eb-4f45-97c5-dc71f66011d6" (UID: "2b522428-69eb-4f45-97c5-dc71f66011d6"). InnerVolumeSpecName "kube-api-access-7trp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.834361 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b522428-69eb-4f45-97c5-dc71f66011d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b522428-69eb-4f45-97c5-dc71f66011d6" (UID: "2b522428-69eb-4f45-97c5-dc71f66011d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.837464 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d158e1ca-8b81-42bd-ad5e-69ae4017ad92" (UID: "d158e1ca-8b81-42bd-ad5e-69ae4017ad92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.838466 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e67e4334-32bb-4e4f-9dad-8209b4e86495" (UID: "e67e4334-32bb-4e4f-9dad-8209b4e86495"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.838810 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-kube-api-access-4grmj" (OuterVolumeSpecName: "kube-api-access-4grmj") pod "d158e1ca-8b81-42bd-ad5e-69ae4017ad92" (UID: "d158e1ca-8b81-42bd-ad5e-69ae4017ad92"). InnerVolumeSpecName "kube-api-access-4grmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.839612 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea6a094-f9f7-4626-9241-c23f2d2685d7-kube-api-access-xkf2l" (OuterVolumeSpecName: "kube-api-access-xkf2l") pod "0ea6a094-f9f7-4626-9241-c23f2d2685d7" (UID: "0ea6a094-f9f7-4626-9241-c23f2d2685d7"). InnerVolumeSpecName "kube-api-access-xkf2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.843219 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67e4334-32bb-4e4f-9dad-8209b4e86495-kube-api-access-t6b2s" (OuterVolumeSpecName: "kube-api-access-t6b2s") pod "e67e4334-32bb-4e4f-9dad-8209b4e86495" (UID: "e67e4334-32bb-4e4f-9dad-8209b4e86495"). InnerVolumeSpecName "kube-api-access-t6b2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.870237 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-config-data" (OuterVolumeSpecName: "config-data") pod "e67e4334-32bb-4e4f-9dad-8209b4e86495" (UID: "e67e4334-32bb-4e4f-9dad-8209b4e86495"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.871799 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e67e4334-32bb-4e4f-9dad-8209b4e86495" (UID: "e67e4334-32bb-4e4f-9dad-8209b4e86495"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.929908 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7trp8\" (UniqueName: \"kubernetes.io/projected/2b522428-69eb-4f45-97c5-dc71f66011d6-kube-api-access-7trp8\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.929939 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.929963 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkf2l\" (UniqueName: \"kubernetes.io/projected/0ea6a094-f9f7-4626-9241-c23f2d2685d7-kube-api-access-xkf2l\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.929973 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.929981 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4grmj\" (UniqueName: \"kubernetes.io/projected/d158e1ca-8b81-42bd-ad5e-69ae4017ad92-kube-api-access-4grmj\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.929989 5050 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-t6b2s\" (UniqueName: \"kubernetes.io/projected/e67e4334-32bb-4e4f-9dad-8209b4e86495-kube-api-access-t6b2s\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.929997 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.930006 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea6a094-f9f7-4626-9241-c23f2d2685d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.930014 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b522428-69eb-4f45-97c5-dc71f66011d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.930023 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e67e4334-32bb-4e4f-9dad-8209b4e86495-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.988337 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1612-account-create-update-2thjx" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.997730 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1612-account-create-update-2thjx" event={"ID":"d158e1ca-8b81-42bd-ad5e-69ae4017ad92","Type":"ContainerDied","Data":"b7f2db5648d18e9cc9020cf96f3599d46e09a8ba7026ec3ba74ff73dd7891e29"} Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.997855 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f2db5648d18e9cc9020cf96f3599d46e09a8ba7026ec3ba74ff73dd7891e29" Jan 31 05:40:05 crc kubenswrapper[5050]: I0131 05:40:05.998190 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2zxjh" event={"ID":"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d","Type":"ContainerStarted","Data":"b54621e91c67e160066fff6dff4ebb21dfe08c5d2bbe064d9aa0deda62d36cd4"} Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.000081 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lkhcw" event={"ID":"2b522428-69eb-4f45-97c5-dc71f66011d6","Type":"ContainerDied","Data":"4655c57b5d8a2cd67337fd00dcebb0069c84de4af69ecd118179c199318fa9d6"} Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.000144 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4655c57b5d8a2cd67337fd00dcebb0069c84de4af69ecd118179c199318fa9d6" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.000393 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lkhcw" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.007207 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8fca-account-create-update-zgmfr" event={"ID":"0ea6a094-f9f7-4626-9241-c23f2d2685d7","Type":"ContainerDied","Data":"a1d3fc9216091eded51574bd034ca1d3cb85d996bb9c196ece83cb07b10c312a"} Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.007271 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d3fc9216091eded51574bd034ca1d3cb85d996bb9c196ece83cb07b10c312a" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.007222 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8fca-account-create-update-zgmfr" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.010568 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2zxjh" podStartSLOduration=1.991722043 podStartE2EDuration="7.01053806s" podCreationTimestamp="2026-01-31 05:39:59 +0000 UTC" firstStartedPulling="2026-01-31 05:40:00.522593716 +0000 UTC m=+1125.571755322" lastFinishedPulling="2026-01-31 05:40:05.541409703 +0000 UTC m=+1130.590571339" observedRunningTime="2026-01-31 05:40:06.007265675 +0000 UTC m=+1131.056427271" watchObservedRunningTime="2026-01-31 05:40:06.01053806 +0000 UTC m=+1131.059699706" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.011979 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zd6" event={"ID":"e67e4334-32bb-4e4f-9dad-8209b4e86495","Type":"ContainerDied","Data":"eebd4bd22e196726abec9922b92d5ab5dcb6a76dbe029917c672c8678ec46a13"} Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.012022 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eebd4bd22e196726abec9922b92d5ab5dcb6a76dbe029917c672c8678ec46a13" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 
05:40:06.012091 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b2zd6" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.405186 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-7h5lq"] Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.405985 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea6a094-f9f7-4626-9241-c23f2d2685d7" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406008 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea6a094-f9f7-4626-9241-c23f2d2685d7" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.406030 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbc6186-a3de-418c-a213-3064164fc5bc" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406040 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbc6186-a3de-418c-a213-3064164fc5bc" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.406058 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67e4334-32bb-4e4f-9dad-8209b4e86495" containerName="glance-db-sync" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406066 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67e4334-32bb-4e4f-9dad-8209b4e86495" containerName="glance-db-sync" Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.406080 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6912f1-f5de-4e40-8dba-4e2d4faee091" containerName="ovn-config" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406090 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6912f1-f5de-4e40-8dba-4e2d4faee091" containerName="ovn-config" Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.406105 5050 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158e1ca-8b81-42bd-ad5e-69ae4017ad92" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406114 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158e1ca-8b81-42bd-ad5e-69ae4017ad92" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.406132 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81c149b-a523-42c5-8d6b-2eefde46201a" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406140 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e81c149b-a523-42c5-8d6b-2eefde46201a" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.406163 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25be051-f6a0-486d-a204-59b3f33af8c8" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406171 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25be051-f6a0-486d-a204-59b3f33af8c8" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: E0131 05:40:06.406201 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b522428-69eb-4f45-97c5-dc71f66011d6" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406211 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b522428-69eb-4f45-97c5-dc71f66011d6" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406585 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158e1ca-8b81-42bd-ad5e-69ae4017ad92" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406625 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67e4334-32bb-4e4f-9dad-8209b4e86495" 
containerName="glance-db-sync" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406644 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e81c149b-a523-42c5-8d6b-2eefde46201a" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406655 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25be051-f6a0-486d-a204-59b3f33af8c8" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406671 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea6a094-f9f7-4626-9241-c23f2d2685d7" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406680 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6912f1-f5de-4e40-8dba-4e2d4faee091" containerName="ovn-config" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406692 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbc6186-a3de-418c-a213-3064164fc5bc" containerName="mariadb-account-create-update" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.406709 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b522428-69eb-4f45-97c5-dc71f66011d6" containerName="mariadb-database-create" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.426324 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.442512 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-7h5lq"] Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.445735 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7f92\" (UniqueName: \"kubernetes.io/projected/29e19125-8e55-4724-bfae-9c1f6e90fbf8-kube-api-access-l7f92\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.445787 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.445821 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-config\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.445859 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.445928 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.546855 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7f92\" (UniqueName: \"kubernetes.io/projected/29e19125-8e55-4724-bfae-9c1f6e90fbf8-kube-api-access-l7f92\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.546907 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.546927 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-config\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.547001 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.547065 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.547920 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.548114 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.548401 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-config\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.548446 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.570942 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7f92\" (UniqueName: \"kubernetes.io/projected/29e19125-8e55-4724-bfae-9c1f6e90fbf8-kube-api-access-l7f92\") pod 
\"dnsmasq-dns-54f9b7b8d9-7h5lq\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:06 crc kubenswrapper[5050]: I0131 05:40:06.771602 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:07 crc kubenswrapper[5050]: I0131 05:40:07.224695 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-7h5lq"] Jan 31 05:40:08 crc kubenswrapper[5050]: I0131 05:40:08.038078 5050 generic.go:334] "Generic (PLEG): container finished" podID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerID="1dc86c871e3e235d26c2acb7b7580190b007a7923135a9957d5982ec8d6fca4c" exitCode=0 Jan 31 05:40:08 crc kubenswrapper[5050]: I0131 05:40:08.038121 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" event={"ID":"29e19125-8e55-4724-bfae-9c1f6e90fbf8","Type":"ContainerDied","Data":"1dc86c871e3e235d26c2acb7b7580190b007a7923135a9957d5982ec8d6fca4c"} Jan 31 05:40:08 crc kubenswrapper[5050]: I0131 05:40:08.038358 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" event={"ID":"29e19125-8e55-4724-bfae-9c1f6e90fbf8","Type":"ContainerStarted","Data":"d1ddf629eca3d6d2e26a5a73259125df5ef433d0262c98d9e4497630730bb650"} Jan 31 05:40:09 crc kubenswrapper[5050]: I0131 05:40:09.047609 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" event={"ID":"29e19125-8e55-4724-bfae-9c1f6e90fbf8","Type":"ContainerStarted","Data":"44c9ed1b3e7936e7eab607d2108e7042503e1ed35da75bc77f5d6a575158c9c8"} Jan 31 05:40:09 crc kubenswrapper[5050]: I0131 05:40:09.047970 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:09 crc kubenswrapper[5050]: I0131 05:40:09.071539 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" podStartSLOduration=3.071521112 podStartE2EDuration="3.071521112s" podCreationTimestamp="2026-01-31 05:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:09.064991923 +0000 UTC m=+1134.114153519" watchObservedRunningTime="2026-01-31 05:40:09.071521112 +0000 UTC m=+1134.120682708" Jan 31 05:40:10 crc kubenswrapper[5050]: I0131 05:40:10.056370 5050 generic.go:334] "Generic (PLEG): container finished" podID="d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" containerID="b54621e91c67e160066fff6dff4ebb21dfe08c5d2bbe064d9aa0deda62d36cd4" exitCode=0 Jan 31 05:40:10 crc kubenswrapper[5050]: I0131 05:40:10.056459 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2zxjh" event={"ID":"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d","Type":"ContainerDied","Data":"b54621e91c67e160066fff6dff4ebb21dfe08c5d2bbe064d9aa0deda62d36cd4"} Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.412175 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.437088 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqxxd\" (UniqueName: \"kubernetes.io/projected/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-kube-api-access-jqxxd\") pod \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.437267 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-combined-ca-bundle\") pod \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.437334 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-config-data\") pod \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\" (UID: \"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d\") " Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.450280 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-kube-api-access-jqxxd" (OuterVolumeSpecName: "kube-api-access-jqxxd") pod "d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" (UID: "d1b21e3d-6de3-4be1-af37-d2fcf6d5521d"). InnerVolumeSpecName "kube-api-access-jqxxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.475822 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" (UID: "d1b21e3d-6de3-4be1-af37-d2fcf6d5521d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.510243 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-config-data" (OuterVolumeSpecName: "config-data") pod "d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" (UID: "d1b21e3d-6de3-4be1-af37-d2fcf6d5521d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.542227 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqxxd\" (UniqueName: \"kubernetes.io/projected/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-kube-api-access-jqxxd\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.542265 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:11 crc kubenswrapper[5050]: I0131 05:40:11.542278 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.082069 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2zxjh" event={"ID":"d1b21e3d-6de3-4be1-af37-d2fcf6d5521d","Type":"ContainerDied","Data":"9093baa4677500905a2a4fc3adca883930a0fa31df4dbf9d84461b12140f5028"} Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.082343 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9093baa4677500905a2a4fc3adca883930a0fa31df4dbf9d84461b12140f5028" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.082176 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2zxjh" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.407686 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xjhh6"] Jan 31 05:40:12 crc kubenswrapper[5050]: E0131 05:40:12.408133 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" containerName="keystone-db-sync" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.408156 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" containerName="keystone-db-sync" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.408355 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" containerName="keystone-db-sync" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.408972 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.411433 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.411640 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.411699 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqp2b" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.412144 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.412511 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.445328 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xjhh6"] Jan 31 05:40:12 crc 
kubenswrapper[5050]: I0131 05:40:12.460141 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-fernet-keys\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.460190 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-credential-keys\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.460212 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-combined-ca-bundle\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.460266 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-config-data\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.460286 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-scripts\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 
05:40:12.460324 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmhz\" (UniqueName: \"kubernetes.io/projected/dd75c006-d92b-4df4-afb7-65de2aca13da-kube-api-access-pfmhz\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.464792 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-7h5lq"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.465036 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" podUID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerName="dnsmasq-dns" containerID="cri-o://44c9ed1b3e7936e7eab607d2108e7042503e1ed35da75bc77f5d6a575158c9c8" gracePeriod=10 Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.494984 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-8km8s"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.496680 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.511898 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-8km8s"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.561569 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.561632 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqn5g\" (UniqueName: \"kubernetes.io/projected/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-kube-api-access-bqn5g\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.561665 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmhz\" (UniqueName: \"kubernetes.io/projected/dd75c006-d92b-4df4-afb7-65de2aca13da-kube-api-access-pfmhz\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.561819 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-config\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.561890 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.561925 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-fernet-keys\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.562004 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-credential-keys\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.562031 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-combined-ca-bundle\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.562092 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-dns-svc\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.562167 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-config-data\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.562200 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-scripts\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.568498 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-scripts\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.574106 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-config-data\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.574837 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-credential-keys\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.586352 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-fernet-keys\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " 
pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.586849 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-combined-ca-bundle\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.599044 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5gld6"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.600563 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmhz\" (UniqueName: \"kubernetes.io/projected/dd75c006-d92b-4df4-afb7-65de2aca13da-kube-api-access-pfmhz\") pod \"keystone-bootstrap-xjhh6\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.602815 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.605712 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.605895 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.606084 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w5fzq" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.615939 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5gld6"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.675670 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-combined-ca-bundle\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.675755 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-config\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.675807 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.675824 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-config-data\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.675884 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-scripts\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.675902 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-db-sync-config-data\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.676007 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-dns-svc\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.676141 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dad1668e-92d0-48a9-9e34-aa95875ce641-etc-machine-id\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.676165 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.676185 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pnw\" (UniqueName: \"kubernetes.io/projected/dad1668e-92d0-48a9-9e34-aa95875ce641-kube-api-access-l4pnw\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.676213 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqn5g\" (UniqueName: \"kubernetes.io/projected/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-kube-api-access-bqn5g\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.678056 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-config\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.678751 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.679452 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.711074 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.713631 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.716942 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.725777 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.729068 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-dns-svc\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.730733 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.737210 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.776450 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4kpps"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.788871 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-combined-ca-bundle\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.788985 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-config-data\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789048 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-scripts\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789082 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-db-sync-config-data\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789117 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6rqfz\" (UniqueName: \"kubernetes.io/projected/d053842c-8e88-4a70-b94c-1cd91a50b731-kube-api-access-6rqfz\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789253 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789282 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789366 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-config-data\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789480 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dad1668e-92d0-48a9-9e34-aa95875ce641-etc-machine-id\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789507 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-log-httpd\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789555 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l4pnw\" (UniqueName: \"kubernetes.io/projected/dad1668e-92d0-48a9-9e34-aa95875ce641-kube-api-access-l4pnw\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789660 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789698 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-scripts\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.789785 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-run-httpd\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.797034 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.797267 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ldx7x" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.797484 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.808516 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bqn5g\" (UniqueName: \"kubernetes.io/projected/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-kube-api-access-bqn5g\") pod \"dnsmasq-dns-6546db6db7-8km8s\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.808667 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dad1668e-92d0-48a9-9e34-aa95875ce641-etc-machine-id\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.810348 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4kpps"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.822682 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.841131 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-config-data\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.895661 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-combined-ca-bundle\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.895994 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-db-sync-config-data\") pod 
\"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.896374 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-scripts\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.896435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4pnw\" (UniqueName: \"kubernetes.io/projected/dad1668e-92d0-48a9-9e34-aa95875ce641-kube-api-access-l4pnw\") pod \"cinder-db-sync-5gld6\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897088 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-log-httpd\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897149 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-config\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897188 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-combined-ca-bundle\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:12 crc 
kubenswrapper[5050]: I0131 05:40:12.897239 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897262 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-scripts\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/c9f82d6b-5e75-48cd-b642-55d3fa91f520-kube-api-access-fv5w6\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897337 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-run-httpd\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897414 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqfz\" (UniqueName: \"kubernetes.io/projected/d053842c-8e88-4a70-b94c-1cd91a50b731-kube-api-access-6rqfz\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897488 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.897547 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-config-data\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.902696 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-run-httpd\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.903825 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.906357 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-log-httpd\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.906566 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.919338 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-config-data\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.926039 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-8km8s"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.926437 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-scripts\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.934244 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j4ptr"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.935541 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.938332 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.938799 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.939180 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqfz\" (UniqueName: \"kubernetes.io/projected/d053842c-8e88-4a70-b94c-1cd91a50b731-kube-api-access-6rqfz\") pod \"ceilometer-0\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") " pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.944067 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j4ptr"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.945331 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.946200 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cnmmt" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.954343 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-dml6c"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.970045 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-dml6c"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.970149 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.962386 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5gld6" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.976707 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-88mvr"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.978387 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.989268 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-88mvr"] Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.998811 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 05:40:12 crc kubenswrapper[5050]: I0131 05:40:12.999023 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-884nh" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.002973 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-config\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003029 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-logs\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " 
pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003100 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-combined-ca-bundle\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003157 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vmxv\" (UniqueName: \"kubernetes.io/projected/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-kube-api-access-8vmxv\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003176 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/c9f82d6b-5e75-48cd-b642-55d3fa91f520-kube-api-access-fv5w6\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003192 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zssdp\" (UniqueName: \"kubernetes.io/projected/2d4e46ab-29a5-409d-977b-3c92880d4f62-kube-api-access-zssdp\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003237 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-config-data\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " 
pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003256 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-combined-ca-bundle\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-scripts\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003348 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-db-sync-config-data\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.003375 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.004257 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-combined-ca-bundle\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " 
pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.004306 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcj7v\" (UniqueName: \"kubernetes.io/projected/4e9fb9c4-2743-4932-8605-f9be30344553-kube-api-access-fcj7v\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.004326 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.004354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-config\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.010927 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-config\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.014597 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-combined-ca-bundle\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:13 crc 
kubenswrapper[5050]: I0131 05:40:13.053198 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/c9f82d6b-5e75-48cd-b642-55d3fa91f520-kube-api-access-fv5w6\") pod \"neutron-db-sync-4kpps\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:13 crc kubenswrapper[5050]: E0131 05:40:13.079100 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29e19125_8e55_4724_bfae_9c1f6e90fbf8.slice/crio-conmon-44c9ed1b3e7936e7eab607d2108e7042503e1ed35da75bc77f5d6a575158c9c8.scope\": RecentStats: unable to find data in memory cache]" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106363 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vmxv\" (UniqueName: \"kubernetes.io/projected/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-kube-api-access-8vmxv\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106420 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zssdp\" (UniqueName: \"kubernetes.io/projected/2d4e46ab-29a5-409d-977b-3c92880d4f62-kube-api-access-zssdp\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106446 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-combined-ca-bundle\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc 
kubenswrapper[5050]: I0131 05:40:13.106463 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-config-data\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106482 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-scripts\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106523 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-db-sync-config-data\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106553 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106571 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-combined-ca-bundle\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106595 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fcj7v\" (UniqueName: \"kubernetes.io/projected/4e9fb9c4-2743-4932-8605-f9be30344553-kube-api-access-fcj7v\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106635 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-config\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106673 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.106688 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-logs\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.108805 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.109525 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-config\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.119743 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.120104 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-logs\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.121791 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.123251 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-combined-ca-bundle\") pod \"barbican-db-sync-88mvr\" (UID: 
\"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.133469 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-config-data\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.138206 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcj7v\" (UniqueName: \"kubernetes.io/projected/4e9fb9c4-2743-4932-8605-f9be30344553-kube-api-access-fcj7v\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.144323 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vmxv\" (UniqueName: \"kubernetes.io/projected/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-kube-api-access-8vmxv\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.145780 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-scripts\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.148387 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-db-sync-config-data\") pod \"barbican-db-sync-88mvr\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 
05:40:13.149636 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-combined-ca-bundle\") pod \"placement-db-sync-j4ptr\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") " pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.159976 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-88mvr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.160860 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zssdp\" (UniqueName: \"kubernetes.io/projected/2d4e46ab-29a5-409d-977b-3c92880d4f62-kube-api-access-zssdp\") pod \"dnsmasq-dns-7987f74bbc-dml6c\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.181495 5050 generic.go:334] "Generic (PLEG): container finished" podID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerID="44c9ed1b3e7936e7eab607d2108e7042503e1ed35da75bc77f5d6a575158c9c8" exitCode=0 Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.181533 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" event={"ID":"29e19125-8e55-4724-bfae-9c1f6e90fbf8","Type":"ContainerDied","Data":"44c9ed1b3e7936e7eab607d2108e7042503e1ed35da75bc77f5d6a575158c9c8"} Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.322629 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4kpps" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.390790 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j4ptr" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.437843 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.468584 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xjhh6"] Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.576657 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-8km8s"] Jan 31 05:40:13 crc kubenswrapper[5050]: W0131 05:40:13.661768 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd585f1_b25d_4a51_9d28_ed8ac1ea3453.slice/crio-a3c5ae5e67612ce81b223f10cd6057bc8092c0bf6858247847355f90708c8e5f WatchSource:0}: Error finding container a3c5ae5e67612ce81b223f10cd6057bc8092c0bf6858247847355f90708c8e5f: Status 404 returned error can't find the container with id a3c5ae5e67612ce81b223f10cd6057bc8092c0bf6858247847355f90708c8e5f Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.754728 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5gld6"] Jan 31 05:40:13 crc kubenswrapper[5050]: W0131 05:40:13.754813 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddad1668e_92d0_48a9_9e34_aa95875ce641.slice/crio-e4544cb9af9fccedd4e4373b86547340ad9501ff61d0a04f2b59f41c1bed8a94 WatchSource:0}: Error finding container e4544cb9af9fccedd4e4373b86547340ad9501ff61d0a04f2b59f41c1bed8a94: Status 404 returned error can't find the container with id e4544cb9af9fccedd4e4373b86547340ad9501ff61d0a04f2b59f41c1bed8a94 Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.882093 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.923998 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.934581 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7f92\" (UniqueName: \"kubernetes.io/projected/29e19125-8e55-4724-bfae-9c1f6e90fbf8-kube-api-access-l7f92\") pod \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.934658 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-sb\") pod \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.934681 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-dns-svc\") pod \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.934708 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-config\") pod \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.934729 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-nb\") pod \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\" (UID: \"29e19125-8e55-4724-bfae-9c1f6e90fbf8\") " Jan 31 05:40:13 crc 
kubenswrapper[5050]: I0131 05:40:13.940983 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e19125-8e55-4724-bfae-9c1f6e90fbf8-kube-api-access-l7f92" (OuterVolumeSpecName: "kube-api-access-l7f92") pod "29e19125-8e55-4724-bfae-9c1f6e90fbf8" (UID: "29e19125-8e55-4724-bfae-9c1f6e90fbf8"). InnerVolumeSpecName "kube-api-access-l7f92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.946293 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4kpps"] Jan 31 05:40:13 crc kubenswrapper[5050]: I0131 05:40:13.954307 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-88mvr"] Jan 31 05:40:13 crc kubenswrapper[5050]: W0131 05:40:13.973415 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9f82d6b_5e75_48cd_b642_55d3fa91f520.slice/crio-c14684211e27c0eaedce9593f3efe371496fb771ab6e9117afa2873b3572e492 WatchSource:0}: Error finding container c14684211e27c0eaedce9593f3efe371496fb771ab6e9117afa2873b3572e492: Status 404 returned error can't find the container with id c14684211e27c0eaedce9593f3efe371496fb771ab6e9117afa2873b3572e492 Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.038399 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7f92\" (UniqueName: \"kubernetes.io/projected/29e19125-8e55-4724-bfae-9c1f6e90fbf8-kube-api-access-l7f92\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.046059 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29e19125-8e55-4724-bfae-9c1f6e90fbf8" (UID: "29e19125-8e55-4724-bfae-9c1f6e90fbf8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.071895 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29e19125-8e55-4724-bfae-9c1f6e90fbf8" (UID: "29e19125-8e55-4724-bfae-9c1f6e90fbf8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.079414 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-config" (OuterVolumeSpecName: "config") pod "29e19125-8e55-4724-bfae-9c1f6e90fbf8" (UID: "29e19125-8e55-4724-bfae-9c1f6e90fbf8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.083003 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j4ptr"] Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.085436 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29e19125-8e55-4724-bfae-9c1f6e90fbf8" (UID: "29e19125-8e55-4724-bfae-9c1f6e90fbf8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.108937 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-dml6c"] Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.139888 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.140227 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.140392 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.140454 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29e19125-8e55-4724-bfae-9c1f6e90fbf8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.192070 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4kpps" event={"ID":"c9f82d6b-5e75-48cd-b642-55d3fa91f520","Type":"ContainerStarted","Data":"85daf693e8813572df891be894023528332c01b59f46c8ac34c40beb6704cb7e"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.192136 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4kpps" event={"ID":"c9f82d6b-5e75-48cd-b642-55d3fa91f520","Type":"ContainerStarted","Data":"c14684211e27c0eaedce9593f3efe371496fb771ab6e9117afa2873b3572e492"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.194075 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" event={"ID":"2d4e46ab-29a5-409d-977b-3c92880d4f62","Type":"ContainerStarted","Data":"d11f9c67dcf290f4c5409c44c7f19f1d56b5269ac829736bc1671b1d4a35d523"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.195783 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gld6" event={"ID":"dad1668e-92d0-48a9-9e34-aa95875ce641","Type":"ContainerStarted","Data":"e4544cb9af9fccedd4e4373b86547340ad9501ff61d0a04f2b59f41c1bed8a94"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.197341 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjhh6" event={"ID":"dd75c006-d92b-4df4-afb7-65de2aca13da","Type":"ContainerStarted","Data":"d7a0100a127ca366c9dc2c59b309ba982bd26b372e38b3b43419ddb2bb977412"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.197446 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjhh6" event={"ID":"dd75c006-d92b-4df4-afb7-65de2aca13da","Type":"ContainerStarted","Data":"f91cfd400267a3abae291df92b96d841b6d9ec008403a86d62efdc50477246ee"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.202197 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88mvr" event={"ID":"4e9fb9c4-2743-4932-8605-f9be30344553","Type":"ContainerStarted","Data":"fdca5254c6cecec40e7db74311a47194b4fbe1dd07b1a84f14ed8f03053afe9f"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.203334 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j4ptr" event={"ID":"5a63fe16-7a6d-429f-bfd4-5dd5db95be12","Type":"ContainerStarted","Data":"0a89b909dac5063f4b4d7f952e7a29d9ef32880e02fa23670a31537f2a4bc40f"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.207134 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerStarted","Data":"0787d9bf1e344e2ee6c1a734ffe163f246a69e5f8282f43484b3e1832aba2fdf"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.210085 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4kpps" podStartSLOduration=2.210070286 podStartE2EDuration="2.210070286s" podCreationTimestamp="2026-01-31 05:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:14.203612369 +0000 UTC m=+1139.252773975" watchObservedRunningTime="2026-01-31 05:40:14.210070286 +0000 UTC m=+1139.259231892" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.214202 5050 generic.go:334] "Generic (PLEG): container finished" podID="6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" containerID="a99c9b1cdbcb0e4b208e7f6ac633e6ee0070a8c6568720ef06460cb24312ae72" exitCode=0 Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.214400 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-8km8s" event={"ID":"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453","Type":"ContainerDied","Data":"a99c9b1cdbcb0e4b208e7f6ac633e6ee0070a8c6568720ef06460cb24312ae72"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.214439 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-8km8s" event={"ID":"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453","Type":"ContainerStarted","Data":"a3c5ae5e67612ce81b223f10cd6057bc8092c0bf6858247847355f90708c8e5f"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.217437 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" event={"ID":"29e19125-8e55-4724-bfae-9c1f6e90fbf8","Type":"ContainerDied","Data":"d1ddf629eca3d6d2e26a5a73259125df5ef433d0262c98d9e4497630730bb650"} Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.217486 5050 scope.go:117] 
"RemoveContainer" containerID="44c9ed1b3e7936e7eab607d2108e7042503e1ed35da75bc77f5d6a575158c9c8" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.217647 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-7h5lq" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.226622 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xjhh6" podStartSLOduration=2.226601213 podStartE2EDuration="2.226601213s" podCreationTimestamp="2026-01-31 05:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:14.223161093 +0000 UTC m=+1139.272322699" watchObservedRunningTime="2026-01-31 05:40:14.226601213 +0000 UTC m=+1139.275762819" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.257408 5050 scope.go:117] "RemoveContainer" containerID="1dc86c871e3e235d26c2acb7b7580190b007a7923135a9957d5982ec8d6fca4c" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.284098 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-7h5lq"] Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.296369 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-7h5lq"] Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.499073 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.549036 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-nb\") pod \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.549094 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-config\") pod \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.549138 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqn5g\" (UniqueName: \"kubernetes.io/projected/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-kube-api-access-bqn5g\") pod \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.549321 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-dns-svc\") pod \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.549349 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-sb\") pod \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\" (UID: \"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453\") " Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.556349 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-kube-api-access-bqn5g" (OuterVolumeSpecName: "kube-api-access-bqn5g") pod "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" (UID: "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453"). InnerVolumeSpecName "kube-api-access-bqn5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.575412 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" (UID: "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.583391 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-config" (OuterVolumeSpecName: "config") pod "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" (UID: "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.584622 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" (UID: "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.618255 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" (UID: "6fd585f1-b25d-4a51-9d28-ed8ac1ea3453"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.652294 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.652321 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.652332 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.652341 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.652352 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqn5g\" (UniqueName: \"kubernetes.io/projected/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453-kube-api-access-bqn5g\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:14 crc kubenswrapper[5050]: I0131 05:40:14.732745 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.236867 5050 generic.go:334] "Generic (PLEG): container finished" podID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerID="772612338aa9c9ef17a1f751410aabf602bfd8393c0a751387afb1cecc31ae06" exitCode=0 Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.236935 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" 
event={"ID":"2d4e46ab-29a5-409d-977b-3c92880d4f62","Type":"ContainerDied","Data":"772612338aa9c9ef17a1f751410aabf602bfd8393c0a751387afb1cecc31ae06"} Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.245047 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-8km8s" Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.251878 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-8km8s" event={"ID":"6fd585f1-b25d-4a51-9d28-ed8ac1ea3453","Type":"ContainerDied","Data":"a3c5ae5e67612ce81b223f10cd6057bc8092c0bf6858247847355f90708c8e5f"} Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.251918 5050 scope.go:117] "RemoveContainer" containerID="a99c9b1cdbcb0e4b208e7f6ac633e6ee0070a8c6568720ef06460cb24312ae72" Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.439400 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-8km8s"] Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.448249 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-8km8s"] Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.784744 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" path="/var/lib/kubelet/pods/29e19125-8e55-4724-bfae-9c1f6e90fbf8/volumes" Jan 31 05:40:15 crc kubenswrapper[5050]: I0131 05:40:15.786333 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" path="/var/lib/kubelet/pods/6fd585f1-b25d-4a51-9d28-ed8ac1ea3453/volumes" Jan 31 05:40:16 crc kubenswrapper[5050]: I0131 05:40:16.261875 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" event={"ID":"2d4e46ab-29a5-409d-977b-3c92880d4f62","Type":"ContainerStarted","Data":"c1bda78eaf69c98db29e69077ed67d9a60cab756becd6bd3de755df17c870d7d"} Jan 31 05:40:16 crc 
kubenswrapper[5050]: I0131 05:40:16.262283 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:16 crc kubenswrapper[5050]: I0131 05:40:16.288912 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" podStartSLOduration=4.288894882 podStartE2EDuration="4.288894882s" podCreationTimestamp="2026-01-31 05:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:16.279726595 +0000 UTC m=+1141.328888191" watchObservedRunningTime="2026-01-31 05:40:16.288894882 +0000 UTC m=+1141.338056478" Jan 31 05:40:22 crc kubenswrapper[5050]: I0131 05:40:22.319225 5050 generic.go:334] "Generic (PLEG): container finished" podID="dd75c006-d92b-4df4-afb7-65de2aca13da" containerID="d7a0100a127ca366c9dc2c59b309ba982bd26b372e38b3b43419ddb2bb977412" exitCode=0 Jan 31 05:40:22 crc kubenswrapper[5050]: I0131 05:40:22.320496 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjhh6" event={"ID":"dd75c006-d92b-4df4-afb7-65de2aca13da","Type":"ContainerDied","Data":"d7a0100a127ca366c9dc2c59b309ba982bd26b372e38b3b43419ddb2bb977412"} Jan 31 05:40:23 crc kubenswrapper[5050]: I0131 05:40:23.440209 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:40:23 crc kubenswrapper[5050]: I0131 05:40:23.496459 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qdgd8"] Jan 31 05:40:23 crc kubenswrapper[5050]: I0131 05:40:23.496810 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" containerID="cri-o://a99d17a1bd8e0b394a55c33ac92e0074a09c7e9e953679ecb27d51bd7ce83544" gracePeriod=10 Jan 
31 05:40:24 crc kubenswrapper[5050]: I0131 05:40:24.336905 5050 generic.go:334] "Generic (PLEG): container finished" podID="074ca3df-49a5-4075-ab96-377ea6feae84" containerID="a99d17a1bd8e0b394a55c33ac92e0074a09c7e9e953679ecb27d51bd7ce83544" exitCode=0 Jan 31 05:40:24 crc kubenswrapper[5050]: I0131 05:40:24.337097 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" event={"ID":"074ca3df-49a5-4075-ab96-377ea6feae84","Type":"ContainerDied","Data":"a99d17a1bd8e0b394a55c33ac92e0074a09c7e9e953679ecb27d51bd7ce83544"} Jan 31 05:40:24 crc kubenswrapper[5050]: I0131 05:40:24.618712 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 05:40:29 crc kubenswrapper[5050]: I0131 05:40:29.617687 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 05:40:34 crc kubenswrapper[5050]: I0131 05:40:34.618218 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 31 05:40:34 crc kubenswrapper[5050]: I0131 05:40:34.618811 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.668806 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.801729 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-scripts\") pod \"dd75c006-d92b-4df4-afb7-65de2aca13da\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.801780 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-fernet-keys\") pod \"dd75c006-d92b-4df4-afb7-65de2aca13da\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.801807 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-credential-keys\") pod \"dd75c006-d92b-4df4-afb7-65de2aca13da\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.801855 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-config-data\") pod \"dd75c006-d92b-4df4-afb7-65de2aca13da\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.801983 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfmhz\" (UniqueName: \"kubernetes.io/projected/dd75c006-d92b-4df4-afb7-65de2aca13da-kube-api-access-pfmhz\") pod \"dd75c006-d92b-4df4-afb7-65de2aca13da\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.802019 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-combined-ca-bundle\") pod \"dd75c006-d92b-4df4-afb7-65de2aca13da\" (UID: \"dd75c006-d92b-4df4-afb7-65de2aca13da\") " Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.807286 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd75c006-d92b-4df4-afb7-65de2aca13da-kube-api-access-pfmhz" (OuterVolumeSpecName: "kube-api-access-pfmhz") pod "dd75c006-d92b-4df4-afb7-65de2aca13da" (UID: "dd75c006-d92b-4df4-afb7-65de2aca13da"). InnerVolumeSpecName "kube-api-access-pfmhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.807887 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dd75c006-d92b-4df4-afb7-65de2aca13da" (UID: "dd75c006-d92b-4df4-afb7-65de2aca13da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.808262 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-scripts" (OuterVolumeSpecName: "scripts") pod "dd75c006-d92b-4df4-afb7-65de2aca13da" (UID: "dd75c006-d92b-4df4-afb7-65de2aca13da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.809169 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dd75c006-d92b-4df4-afb7-65de2aca13da" (UID: "dd75c006-d92b-4df4-afb7-65de2aca13da"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.829701 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-config-data" (OuterVolumeSpecName: "config-data") pod "dd75c006-d92b-4df4-afb7-65de2aca13da" (UID: "dd75c006-d92b-4df4-afb7-65de2aca13da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.833199 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd75c006-d92b-4df4-afb7-65de2aca13da" (UID: "dd75c006-d92b-4df4-afb7-65de2aca13da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.903817 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfmhz\" (UniqueName: \"kubernetes.io/projected/dd75c006-d92b-4df4-afb7-65de2aca13da-kube-api-access-pfmhz\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.904079 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.904173 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.904263 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 
05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.904345 5050 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:38 crc kubenswrapper[5050]: I0131 05:40:38.904430 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd75c006-d92b-4df4-afb7-65de2aca13da-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.018148 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.018417 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.427118 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.427408 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcj7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-88mvr_openstack(4e9fb9c4-2743-4932-8605-f9be30344553): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.429781 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-88mvr" 
podUID="4e9fb9c4-2743-4932-8605-f9be30344553" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.543495 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjhh6" event={"ID":"dd75c006-d92b-4df4-afb7-65de2aca13da","Type":"ContainerDied","Data":"f91cfd400267a3abae291df92b96d841b6d9ec008403a86d62efdc50477246ee"} Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.543544 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91cfd400267a3abae291df92b96d841b6d9ec008403a86d62efdc50477246ee" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.543544 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xjhh6" Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.545632 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-88mvr" podUID="4e9fb9c4-2743-4932-8605-f9be30344553" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.765269 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xjhh6"] Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.773772 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xjhh6"] Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859208 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-66vdh"] Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.859601 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" containerName="init" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859623 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" 
containerName="init" Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.859646 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd75c006-d92b-4df4-afb7-65de2aca13da" containerName="keystone-bootstrap" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859657 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd75c006-d92b-4df4-afb7-65de2aca13da" containerName="keystone-bootstrap" Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.859673 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerName="dnsmasq-dns" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859681 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerName="dnsmasq-dns" Jan 31 05:40:39 crc kubenswrapper[5050]: E0131 05:40:39.859692 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerName="init" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859700 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerName="init" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859903 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd75c006-d92b-4df4-afb7-65de2aca13da" containerName="keystone-bootstrap" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859927 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e19125-8e55-4724-bfae-9c1f6e90fbf8" containerName="dnsmasq-dns" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.859940 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd585f1-b25d-4a51-9d28-ed8ac1ea3453" containerName="init" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.860564 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.864010 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.864387 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.864539 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.864622 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.865377 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqp2b" Jan 31 05:40:39 crc kubenswrapper[5050]: I0131 05:40:39.874381 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-66vdh"] Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.022988 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-combined-ca-bundle\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.023047 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6xs\" (UniqueName: \"kubernetes.io/projected/d00cb797-dd0a-4e75-844f-45a7ddd15d45-kube-api-access-br6xs\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.023084 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-config-data\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.023132 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-fernet-keys\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.023163 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-scripts\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.023223 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-credential-keys\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.125133 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-fernet-keys\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.125177 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-scripts\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.125232 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-credential-keys\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.125307 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-combined-ca-bundle\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.125328 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br6xs\" (UniqueName: \"kubernetes.io/projected/d00cb797-dd0a-4e75-844f-45a7ddd15d45-kube-api-access-br6xs\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.125356 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-config-data\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.132678 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-credential-keys\") pod 
\"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.133283 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-combined-ca-bundle\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.134486 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-fernet-keys\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.135736 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-scripts\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.139809 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-config-data\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc kubenswrapper[5050]: I0131 05:40:40.144278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br6xs\" (UniqueName: \"kubernetes.io/projected/d00cb797-dd0a-4e75-844f-45a7ddd15d45-kube-api-access-br6xs\") pod \"keystone-bootstrap-66vdh\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:40 crc 
kubenswrapper[5050]: I0131 05:40:40.185156 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:40:41 crc kubenswrapper[5050]: I0131 05:40:41.753269 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd75c006-d92b-4df4-afb7-65de2aca13da" path="/var/lib/kubelet/pods/dd75c006-d92b-4df4-afb7-65de2aca13da/volumes" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.289783 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.364257 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-config\") pod \"074ca3df-49a5-4075-ab96-377ea6feae84\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.364341 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-dns-svc\") pod \"074ca3df-49a5-4075-ab96-377ea6feae84\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.364442 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2r6t\" (UniqueName: \"kubernetes.io/projected/074ca3df-49a5-4075-ab96-377ea6feae84-kube-api-access-m2r6t\") pod \"074ca3df-49a5-4075-ab96-377ea6feae84\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.364582 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-sb\") pod \"074ca3df-49a5-4075-ab96-377ea6feae84\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " 
Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.364626 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-nb\") pod \"074ca3df-49a5-4075-ab96-377ea6feae84\" (UID: \"074ca3df-49a5-4075-ab96-377ea6feae84\") " Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.368719 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074ca3df-49a5-4075-ab96-377ea6feae84-kube-api-access-m2r6t" (OuterVolumeSpecName: "kube-api-access-m2r6t") pod "074ca3df-49a5-4075-ab96-377ea6feae84" (UID: "074ca3df-49a5-4075-ab96-377ea6feae84"). InnerVolumeSpecName "kube-api-access-m2r6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.399566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-config" (OuterVolumeSpecName: "config") pod "074ca3df-49a5-4075-ab96-377ea6feae84" (UID: "074ca3df-49a5-4075-ab96-377ea6feae84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.408633 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "074ca3df-49a5-4075-ab96-377ea6feae84" (UID: "074ca3df-49a5-4075-ab96-377ea6feae84"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.410045 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "074ca3df-49a5-4075-ab96-377ea6feae84" (UID: "074ca3df-49a5-4075-ab96-377ea6feae84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.411083 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "074ca3df-49a5-4075-ab96-377ea6feae84" (UID: "074ca3df-49a5-4075-ab96-377ea6feae84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.467119 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2r6t\" (UniqueName: \"kubernetes.io/projected/074ca3df-49a5-4075-ab96-377ea6feae84-kube-api-access-m2r6t\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.467151 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.467165 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.467179 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-config\") on node \"crc\" DevicePath \"\"" Jan 31 
05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.467190 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/074ca3df-49a5-4075-ab96-377ea6feae84-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.570154 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" event={"ID":"074ca3df-49a5-4075-ab96-377ea6feae84","Type":"ContainerDied","Data":"a97d1e3616c011c7ead301a7ab2af1b96a64aa32d6ba8a9886aab55e141b7772"} Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.570213 5050 scope.go:117] "RemoveContainer" containerID="a99d17a1bd8e0b394a55c33ac92e0074a09c7e9e953679ecb27d51bd7ce83544" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.570232 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.624121 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qdgd8"] Jan 31 05:40:42 crc kubenswrapper[5050]: I0131 05:40:42.631365 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qdgd8"] Jan 31 05:40:43 crc kubenswrapper[5050]: I0131 05:40:43.755414 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" path="/var/lib/kubelet/pods/074ca3df-49a5-4075-ab96-377ea6feae84/volumes" Jan 31 05:40:44 crc kubenswrapper[5050]: I0131 05:40:44.619409 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-qdgd8" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Jan 31 05:40:49 crc kubenswrapper[5050]: E0131 05:40:49.259334 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 31 05:40:52 crc kubenswrapper[5050]: E0131 05:40:49.260603 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7fh58h6dh65fh56bh667h669h76h577h8ch586h644h64bh5cbh664h66bh695hcfh96h85h57h55chbch68dh5b6h5d5hd7h54dh644h569h564h5ccq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rqfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d053842c-8e88-4a70-b94c-1cd91a50b731): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:40:52 crc kubenswrapper[5050]: I0131 05:40:52.612332 5050 scope.go:117] "RemoveContainer" containerID="5a999813b46f4573cca649d8282c2461f6f7d93a57e957b286edfda6dd9b87c4" Jan 31 05:40:52 crc kubenswrapper[5050]: E0131 05:40:52.748419 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 31 05:40:52 crc kubenswrapper[5050]: E0131 05:40:52.748605 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4pnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5gld6_openstack(dad1668e-92d0-48a9-9e34-aa95875ce641): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:40:52 crc kubenswrapper[5050]: E0131 05:40:52.750103 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5gld6" podUID="dad1668e-92d0-48a9-9e34-aa95875ce641" Jan 31 05:40:53 crc kubenswrapper[5050]: I0131 05:40:53.145475 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-66vdh"] Jan 31 05:40:53 crc kubenswrapper[5050]: W0131 05:40:53.149548 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd00cb797_dd0a_4e75_844f_45a7ddd15d45.slice/crio-ea073270fa1e42664f2fbe25dad040122498213511bcee4a0c122632dd863ba3 WatchSource:0}: Error finding container ea073270fa1e42664f2fbe25dad040122498213511bcee4a0c122632dd863ba3: Status 404 returned error can't find the container with id ea073270fa1e42664f2fbe25dad040122498213511bcee4a0c122632dd863ba3 Jan 31 05:40:53 crc kubenswrapper[5050]: I0131 05:40:53.694252 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j4ptr" event={"ID":"5a63fe16-7a6d-429f-bfd4-5dd5db95be12","Type":"ContainerStarted","Data":"718ca33c6d5cd225bed41d6f32a0b4b9b751af550254b4bfbb3e0144acea1d74"} Jan 31 05:40:53 crc kubenswrapper[5050]: I0131 05:40:53.697740 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-66vdh" 
event={"ID":"d00cb797-dd0a-4e75-844f-45a7ddd15d45","Type":"ContainerStarted","Data":"d9288ab80fd79ff9533f033bf8ce0811dfde0e6b08ee876c408455cb1593cbb0"} Jan 31 05:40:53 crc kubenswrapper[5050]: I0131 05:40:53.697795 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-66vdh" event={"ID":"d00cb797-dd0a-4e75-844f-45a7ddd15d45","Type":"ContainerStarted","Data":"ea073270fa1e42664f2fbe25dad040122498213511bcee4a0c122632dd863ba3"} Jan 31 05:40:53 crc kubenswrapper[5050]: E0131 05:40:53.701847 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5gld6" podUID="dad1668e-92d0-48a9-9e34-aa95875ce641" Jan 31 05:40:53 crc kubenswrapper[5050]: I0131 05:40:53.723163 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j4ptr" podStartSLOduration=13.663682213 podStartE2EDuration="41.723130743s" podCreationTimestamp="2026-01-31 05:40:12 +0000 UTC" firstStartedPulling="2026-01-31 05:40:14.124145611 +0000 UTC m=+1139.173307207" lastFinishedPulling="2026-01-31 05:40:42.183594121 +0000 UTC m=+1167.232755737" observedRunningTime="2026-01-31 05:40:53.722484126 +0000 UTC m=+1178.771645772" watchObservedRunningTime="2026-01-31 05:40:53.723130743 +0000 UTC m=+1178.772292359" Jan 31 05:40:53 crc kubenswrapper[5050]: I0131 05:40:53.779021 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-66vdh" podStartSLOduration=14.778994344000001 podStartE2EDuration="14.778994344s" podCreationTimestamp="2026-01-31 05:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:40:53.769006785 +0000 UTC m=+1178.818168391" 
watchObservedRunningTime="2026-01-31 05:40:53.778994344 +0000 UTC m=+1178.828155970" Jan 31 05:40:56 crc kubenswrapper[5050]: I0131 05:40:56.752266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88mvr" event={"ID":"4e9fb9c4-2743-4932-8605-f9be30344553","Type":"ContainerStarted","Data":"f0dfd2019c58d47e2f8eef513b6d5ae57f2c27fe821a65035ee85f99a4f2aa67"} Jan 31 05:40:56 crc kubenswrapper[5050]: I0131 05:40:56.754687 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerStarted","Data":"b07da9377c56ae66d770c63b8aca2e819f07a03c16d9ca27c0566c0a77feb944"} Jan 31 05:40:56 crc kubenswrapper[5050]: I0131 05:40:56.787283 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-88mvr" podStartSLOduration=2.531568275 podStartE2EDuration="44.787263976s" podCreationTimestamp="2026-01-31 05:40:12 +0000 UTC" firstStartedPulling="2026-01-31 05:40:13.982449516 +0000 UTC m=+1139.031611102" lastFinishedPulling="2026-01-31 05:40:56.238145167 +0000 UTC m=+1181.287306803" observedRunningTime="2026-01-31 05:40:56.785154642 +0000 UTC m=+1181.834316288" watchObservedRunningTime="2026-01-31 05:40:56.787263976 +0000 UTC m=+1181.836425572" Jan 31 05:40:59 crc kubenswrapper[5050]: I0131 05:40:59.791276 5050 generic.go:334] "Generic (PLEG): container finished" podID="d00cb797-dd0a-4e75-844f-45a7ddd15d45" containerID="d9288ab80fd79ff9533f033bf8ce0811dfde0e6b08ee876c408455cb1593cbb0" exitCode=0 Jan 31 05:40:59 crc kubenswrapper[5050]: I0131 05:40:59.791813 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-66vdh" event={"ID":"d00cb797-dd0a-4e75-844f-45a7ddd15d45","Type":"ContainerDied","Data":"d9288ab80fd79ff9533f033bf8ce0811dfde0e6b08ee876c408455cb1593cbb0"} Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.219772 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.316650 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-credential-keys\") pod \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.316742 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-scripts\") pod \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.316769 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-combined-ca-bundle\") pod \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.316797 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br6xs\" (UniqueName: \"kubernetes.io/projected/d00cb797-dd0a-4e75-844f-45a7ddd15d45-kube-api-access-br6xs\") pod \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.316854 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-fernet-keys\") pod \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.316876 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-config-data\") pod \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\" (UID: \"d00cb797-dd0a-4e75-844f-45a7ddd15d45\") " Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.323462 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00cb797-dd0a-4e75-844f-45a7ddd15d45-kube-api-access-br6xs" (OuterVolumeSpecName: "kube-api-access-br6xs") pod "d00cb797-dd0a-4e75-844f-45a7ddd15d45" (UID: "d00cb797-dd0a-4e75-844f-45a7ddd15d45"). InnerVolumeSpecName "kube-api-access-br6xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.323997 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-scripts" (OuterVolumeSpecName: "scripts") pod "d00cb797-dd0a-4e75-844f-45a7ddd15d45" (UID: "d00cb797-dd0a-4e75-844f-45a7ddd15d45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.328579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d00cb797-dd0a-4e75-844f-45a7ddd15d45" (UID: "d00cb797-dd0a-4e75-844f-45a7ddd15d45"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.328678 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d00cb797-dd0a-4e75-844f-45a7ddd15d45" (UID: "d00cb797-dd0a-4e75-844f-45a7ddd15d45"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.341032 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-config-data" (OuterVolumeSpecName: "config-data") pod "d00cb797-dd0a-4e75-844f-45a7ddd15d45" (UID: "d00cb797-dd0a-4e75-844f-45a7ddd15d45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.350623 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d00cb797-dd0a-4e75-844f-45a7ddd15d45" (UID: "d00cb797-dd0a-4e75-844f-45a7ddd15d45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.418629 5050 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.418664 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.418674 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.418683 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br6xs\" (UniqueName: \"kubernetes.io/projected/d00cb797-dd0a-4e75-844f-45a7ddd15d45-kube-api-access-br6xs\") on node \"crc\" DevicePath 
\"\"" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.418693 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.418701 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00cb797-dd0a-4e75-844f-45a7ddd15d45-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.871661 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-66vdh" event={"ID":"d00cb797-dd0a-4e75-844f-45a7ddd15d45","Type":"ContainerDied","Data":"ea073270fa1e42664f2fbe25dad040122498213511bcee4a0c122632dd863ba3"} Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.871726 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea073270fa1e42664f2fbe25dad040122498213511bcee4a0c122632dd863ba3" Jan 31 05:41:05 crc kubenswrapper[5050]: I0131 05:41:05.871731 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-66vdh" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.374263 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bf64f7fd-jlmtk"] Jan 31 05:41:06 crc kubenswrapper[5050]: E0131 05:41:06.374663 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="init" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.374681 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="init" Jan 31 05:41:06 crc kubenswrapper[5050]: E0131 05:41:06.374710 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.374722 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" Jan 31 05:41:06 crc kubenswrapper[5050]: E0131 05:41:06.374743 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00cb797-dd0a-4e75-844f-45a7ddd15d45" containerName="keystone-bootstrap" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.374752 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00cb797-dd0a-4e75-844f-45a7ddd15d45" containerName="keystone-bootstrap" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.374934 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00cb797-dd0a-4e75-844f-45a7ddd15d45" containerName="keystone-bootstrap" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.374992 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="074ca3df-49a5-4075-ab96-377ea6feae84" containerName="dnsmasq-dns" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.375538 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.382830 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.383170 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.383335 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.383561 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.383692 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqp2b" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.384070 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.407119 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bf64f7fd-jlmtk"] Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540622 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-fernet-keys\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540666 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-config-data\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " 
pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-credential-keys\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540714 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-scripts\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540822 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-internal-tls-certs\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540848 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-public-tls-certs\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdrp\" (UniqueName: \"kubernetes.io/projected/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-kube-api-access-mkdrp\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") 
" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.540903 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-combined-ca-bundle\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.642324 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-combined-ca-bundle\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.642438 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-fernet-keys\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.642468 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-config-data\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.642488 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-credential-keys\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: 
I0131 05:41:06.642523 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-scripts\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.642583 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-internal-tls-certs\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.642616 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-public-tls-certs\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.642651 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdrp\" (UniqueName: \"kubernetes.io/projected/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-kube-api-access-mkdrp\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.646046 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-scripts\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.646636 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-combined-ca-bundle\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.649650 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-fernet-keys\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.652346 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-public-tls-certs\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.658108 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-internal-tls-certs\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.670336 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-credential-keys\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.672520 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-config-data\") pod 
\"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.675497 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdrp\" (UniqueName: \"kubernetes.io/projected/f146da43-4dcb-46f5-a04b-2c5ef4b11fd8-kube-api-access-mkdrp\") pod \"keystone-7bf64f7fd-jlmtk\" (UID: \"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8\") " pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:06 crc kubenswrapper[5050]: I0131 05:41:06.696735 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:07 crc kubenswrapper[5050]: I0131 05:41:07.143552 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bf64f7fd-jlmtk"] Jan 31 05:41:07 crc kubenswrapper[5050]: W0131 05:41:07.157474 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf146da43_4dcb_46f5_a04b_2c5ef4b11fd8.slice/crio-a87d82941775c8b999867f60730e5abb6ce686b6fceb0d90acbde896f930b3d2 WatchSource:0}: Error finding container a87d82941775c8b999867f60730e5abb6ce686b6fceb0d90acbde896f930b3d2: Status 404 returned error can't find the container with id a87d82941775c8b999867f60730e5abb6ce686b6fceb0d90acbde896f930b3d2 Jan 31 05:41:07 crc kubenswrapper[5050]: I0131 05:41:07.890429 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bf64f7fd-jlmtk" event={"ID":"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8","Type":"ContainerStarted","Data":"a87d82941775c8b999867f60730e5abb6ce686b6fceb0d90acbde896f930b3d2"} Jan 31 05:41:08 crc kubenswrapper[5050]: I0131 05:41:08.908205 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bf64f7fd-jlmtk" 
event={"ID":"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8","Type":"ContainerStarted","Data":"64f0f348f165390ea1988d2662452a5ceb8269ed7be72c3aebb26815f1c246de"} Jan 31 05:41:09 crc kubenswrapper[5050]: I0131 05:41:09.018362 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:41:09 crc kubenswrapper[5050]: I0131 05:41:09.018428 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:41:10 crc kubenswrapper[5050]: I0131 05:41:10.927778 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:10 crc kubenswrapper[5050]: I0131 05:41:10.957494 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bf64f7fd-jlmtk" podStartSLOduration=4.957466265 podStartE2EDuration="4.957466265s" podCreationTimestamp="2026-01-31 05:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:41:10.953279117 +0000 UTC m=+1196.002440753" watchObservedRunningTime="2026-01-31 05:41:10.957466265 +0000 UTC m=+1196.006627891" Jan 31 05:41:23 crc kubenswrapper[5050]: I0131 05:41:23.058307 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gld6" event={"ID":"dad1668e-92d0-48a9-9e34-aa95875ce641","Type":"ContainerStarted","Data":"68f7f56ffae81e641128b37b068c46006d3048daab86f910905070b2f0b5ad97"} Jan 31 05:41:23 crc kubenswrapper[5050]: I0131 
05:41:23.062926 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerStarted","Data":"383a5a2ab1f94cfcbf7646bd56e3c1eab9349b702a1a8603708ad53700bddfee"} Jan 31 05:41:23 crc kubenswrapper[5050]: I0131 05:41:23.082391 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5gld6" podStartSLOduration=2.7013439740000003 podStartE2EDuration="1m11.082371811s" podCreationTimestamp="2026-01-31 05:40:12 +0000 UTC" firstStartedPulling="2026-01-31 05:40:13.756262184 +0000 UTC m=+1138.805423780" lastFinishedPulling="2026-01-31 05:41:22.137290021 +0000 UTC m=+1207.186451617" observedRunningTime="2026-01-31 05:41:23.081307544 +0000 UTC m=+1208.130469170" watchObservedRunningTime="2026-01-31 05:41:23.082371811 +0000 UTC m=+1208.131533427" Jan 31 05:41:33 crc kubenswrapper[5050]: E0131 05:41:33.293574 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" Jan 31 05:41:34 crc kubenswrapper[5050]: I0131 05:41:34.161246 5050 generic.go:334] "Generic (PLEG): container finished" podID="5a63fe16-7a6d-429f-bfd4-5dd5db95be12" containerID="718ca33c6d5cd225bed41d6f32a0b4b9b751af550254b4bfbb3e0144acea1d74" exitCode=0 Jan 31 05:41:34 crc kubenswrapper[5050]: I0131 05:41:34.161337 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j4ptr" event={"ID":"5a63fe16-7a6d-429f-bfd4-5dd5db95be12","Type":"ContainerDied","Data":"718ca33c6d5cd225bed41d6f32a0b4b9b751af550254b4bfbb3e0144acea1d74"} Jan 31 05:41:34 crc kubenswrapper[5050]: I0131 05:41:34.166261 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerStarted","Data":"6bf714eac3f1749a195ad48c167788ef0f02e1eb6c69c3892ea341a7cce5b5ac"} Jan 31 05:41:34 crc kubenswrapper[5050]: I0131 05:41:34.166570 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:41:34 crc kubenswrapper[5050]: I0131 05:41:34.166585 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="proxy-httpd" containerID="cri-o://6bf714eac3f1749a195ad48c167788ef0f02e1eb6c69c3892ea341a7cce5b5ac" gracePeriod=30 Jan 31 05:41:34 crc kubenswrapper[5050]: I0131 05:41:34.166641 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="sg-core" containerID="cri-o://383a5a2ab1f94cfcbf7646bd56e3c1eab9349b702a1a8603708ad53700bddfee" gracePeriod=30 Jan 31 05:41:34 crc kubenswrapper[5050]: I0131 05:41:34.166501 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="ceilometer-notification-agent" containerID="cri-o://b07da9377c56ae66d770c63b8aca2e819f07a03c16d9ca27c0566c0a77feb944" gracePeriod=30 Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.177451 5050 generic.go:334] "Generic (PLEG): container finished" podID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerID="6bf714eac3f1749a195ad48c167788ef0f02e1eb6c69c3892ea341a7cce5b5ac" exitCode=0 Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.177727 5050 generic.go:334] "Generic (PLEG): container finished" podID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerID="383a5a2ab1f94cfcbf7646bd56e3c1eab9349b702a1a8603708ad53700bddfee" exitCode=2 Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.177738 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerID="b07da9377c56ae66d770c63b8aca2e819f07a03c16d9ca27c0566c0a77feb944" exitCode=0 Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.177927 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerDied","Data":"6bf714eac3f1749a195ad48c167788ef0f02e1eb6c69c3892ea341a7cce5b5ac"} Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.177990 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerDied","Data":"383a5a2ab1f94cfcbf7646bd56e3c1eab9349b702a1a8603708ad53700bddfee"} Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.178000 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerDied","Data":"b07da9377c56ae66d770c63b8aca2e819f07a03c16d9ca27c0566c0a77feb944"} Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.496686 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.505799 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j4ptr"
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.592638 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-scripts\") pod \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.592787 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vmxv\" (UniqueName: \"kubernetes.io/projected/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-kube-api-access-8vmxv\") pod \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.592832 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rqfz\" (UniqueName: \"kubernetes.io/projected/d053842c-8e88-4a70-b94c-1cd91a50b731-kube-api-access-6rqfz\") pod \"d053842c-8e88-4a70-b94c-1cd91a50b731\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.592898 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-sg-core-conf-yaml\") pod \"d053842c-8e88-4a70-b94c-1cd91a50b731\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593005 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-scripts\") pod \"d053842c-8e88-4a70-b94c-1cd91a50b731\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593054 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-logs\") pod \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593099 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-config-data\") pod \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593154 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-combined-ca-bundle\") pod \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\" (UID: \"5a63fe16-7a6d-429f-bfd4-5dd5db95be12\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593211 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-combined-ca-bundle\") pod \"d053842c-8e88-4a70-b94c-1cd91a50b731\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593248 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-run-httpd\") pod \"d053842c-8e88-4a70-b94c-1cd91a50b731\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593282 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-log-httpd\") pod \"d053842c-8e88-4a70-b94c-1cd91a50b731\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.593314 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-config-data\") pod \"d053842c-8e88-4a70-b94c-1cd91a50b731\" (UID: \"d053842c-8e88-4a70-b94c-1cd91a50b731\") "
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.594318 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-logs" (OuterVolumeSpecName: "logs") pod "5a63fe16-7a6d-429f-bfd4-5dd5db95be12" (UID: "5a63fe16-7a6d-429f-bfd4-5dd5db95be12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.595727 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d053842c-8e88-4a70-b94c-1cd91a50b731" (UID: "d053842c-8e88-4a70-b94c-1cd91a50b731"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.596168 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d053842c-8e88-4a70-b94c-1cd91a50b731" (UID: "d053842c-8e88-4a70-b94c-1cd91a50b731"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.604118 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-scripts" (OuterVolumeSpecName: "scripts") pod "5a63fe16-7a6d-429f-bfd4-5dd5db95be12" (UID: "5a63fe16-7a6d-429f-bfd4-5dd5db95be12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.604148 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-scripts" (OuterVolumeSpecName: "scripts") pod "d053842c-8e88-4a70-b94c-1cd91a50b731" (UID: "d053842c-8e88-4a70-b94c-1cd91a50b731"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.604256 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-kube-api-access-8vmxv" (OuterVolumeSpecName: "kube-api-access-8vmxv") pod "5a63fe16-7a6d-429f-bfd4-5dd5db95be12" (UID: "5a63fe16-7a6d-429f-bfd4-5dd5db95be12"). InnerVolumeSpecName "kube-api-access-8vmxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.607585 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d053842c-8e88-4a70-b94c-1cd91a50b731-kube-api-access-6rqfz" (OuterVolumeSpecName: "kube-api-access-6rqfz") pod "d053842c-8e88-4a70-b94c-1cd91a50b731" (UID: "d053842c-8e88-4a70-b94c-1cd91a50b731"). InnerVolumeSpecName "kube-api-access-6rqfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.621396 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-config-data" (OuterVolumeSpecName: "config-data") pod "5a63fe16-7a6d-429f-bfd4-5dd5db95be12" (UID: "5a63fe16-7a6d-429f-bfd4-5dd5db95be12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.625085 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d053842c-8e88-4a70-b94c-1cd91a50b731" (UID: "d053842c-8e88-4a70-b94c-1cd91a50b731"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.637963 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d053842c-8e88-4a70-b94c-1cd91a50b731" (UID: "d053842c-8e88-4a70-b94c-1cd91a50b731"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.652369 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a63fe16-7a6d-429f-bfd4-5dd5db95be12" (UID: "5a63fe16-7a6d-429f-bfd4-5dd5db95be12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.665687 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-config-data" (OuterVolumeSpecName: "config-data") pod "d053842c-8e88-4a70-b94c-1cd91a50b731" (UID: "d053842c-8e88-4a70-b94c-1cd91a50b731"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694814 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694860 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694871 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694879 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d053842c-8e88-4a70-b94c-1cd91a50b731-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694887 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694895 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694903 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vmxv\" (UniqueName: \"kubernetes.io/projected/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-kube-api-access-8vmxv\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694917 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rqfz\" (UniqueName: \"kubernetes.io/projected/d053842c-8e88-4a70-b94c-1cd91a50b731-kube-api-access-6rqfz\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694925 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694933 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d053842c-8e88-4a70-b94c-1cd91a50b731-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694941 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-logs\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:35 crc kubenswrapper[5050]: I0131 05:41:35.694965 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a63fe16-7a6d-429f-bfd4-5dd5db95be12-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.187119 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j4ptr" event={"ID":"5a63fe16-7a6d-429f-bfd4-5dd5db95be12","Type":"ContainerDied","Data":"0a89b909dac5063f4b4d7f952e7a29d9ef32880e02fa23670a31537f2a4bc40f"}
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.187164 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a89b909dac5063f4b4d7f952e7a29d9ef32880e02fa23670a31537f2a4bc40f"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.188379 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j4ptr"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.188550 5050 generic.go:334] "Generic (PLEG): container finished" podID="4e9fb9c4-2743-4932-8605-f9be30344553" containerID="f0dfd2019c58d47e2f8eef513b6d5ae57f2c27fe821a65035ee85f99a4f2aa67" exitCode=0
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.188597 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88mvr" event={"ID":"4e9fb9c4-2743-4932-8605-f9be30344553","Type":"ContainerDied","Data":"f0dfd2019c58d47e2f8eef513b6d5ae57f2c27fe821a65035ee85f99a4f2aa67"}
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.191181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d053842c-8e88-4a70-b94c-1cd91a50b731","Type":"ContainerDied","Data":"0787d9bf1e344e2ee6c1a734ffe163f246a69e5f8282f43484b3e1832aba2fdf"}
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.191247 5050 scope.go:117] "RemoveContainer" containerID="6bf714eac3f1749a195ad48c167788ef0f02e1eb6c69c3892ea341a7cce5b5ac"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.191383 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.215060 5050 scope.go:117] "RemoveContainer" containerID="383a5a2ab1f94cfcbf7646bd56e3c1eab9349b702a1a8603708ad53700bddfee"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.239679 5050 scope.go:117] "RemoveContainer" containerID="b07da9377c56ae66d770c63b8aca2e819f07a03c16d9ca27c0566c0a77feb944"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.309395 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.356784 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.389367 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:41:36 crc kubenswrapper[5050]: E0131 05:41:36.389757 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="ceilometer-notification-agent"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.389788 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="ceilometer-notification-agent"
Jan 31 05:41:36 crc kubenswrapper[5050]: E0131 05:41:36.389811 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a63fe16-7a6d-429f-bfd4-5dd5db95be12" containerName="placement-db-sync"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.389819 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a63fe16-7a6d-429f-bfd4-5dd5db95be12" containerName="placement-db-sync"
Jan 31 05:41:36 crc kubenswrapper[5050]: E0131 05:41:36.389831 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="proxy-httpd"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.389839 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="proxy-httpd"
Jan 31 05:41:36 crc kubenswrapper[5050]: E0131 05:41:36.389851 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="sg-core"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.389858 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="sg-core"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.390072 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="ceilometer-notification-agent"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.390094 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="proxy-httpd"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.390107 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a63fe16-7a6d-429f-bfd4-5dd5db95be12" containerName="placement-db-sync"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.390120 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" containerName="sg-core"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.392188 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.398239 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.399117 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.426230 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.440815 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d9964f68-4b9hp"]
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.442011 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.444783 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.445869 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.446155 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.446229 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cnmmt"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.446995 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.459391 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9964f68-4b9hp"]
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.467594 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:41:36 crc kubenswrapper[5050]: E0131 05:41:36.468042 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-wzzqv log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data kube-api-access-wzzqv log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="b43085dd-f2f1-41c4-8a7f-20c34c8d224c"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518092 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-run-httpd\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518138 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-log-httpd\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518172 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518218 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-scripts\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518242 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518264 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-config-data\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518284 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-scripts\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518301 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzqv\" (UniqueName: \"kubernetes.io/projected/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-kube-api-access-wzzqv\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518317 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-public-tls-certs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518336 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-internal-tls-certs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518361 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-combined-ca-bundle\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-config-data\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518523 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6bfc63-e5ee-4960-8b68-bf9be807990c-logs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.518646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsk85\" (UniqueName: \"kubernetes.io/projected/9b6bfc63-e5ee-4960-8b68-bf9be807990c-kube-api-access-bsk85\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.619933 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.620138 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-scripts\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.620197 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.620243 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-config-data\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.620294 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-scripts\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.620328 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzqv\" (UniqueName: \"kubernetes.io/projected/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-kube-api-access-wzzqv\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.620934 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-public-tls-certs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.621079 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-internal-tls-certs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.621215 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-combined-ca-bundle\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.621401 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-config-data\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.622156 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6bfc63-e5ee-4960-8b68-bf9be807990c-logs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.622235 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsk85\" (UniqueName: \"kubernetes.io/projected/9b6bfc63-e5ee-4960-8b68-bf9be807990c-kube-api-access-bsk85\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.622288 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-run-httpd\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.622330 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-log-httpd\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.623695 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6bfc63-e5ee-4960-8b68-bf9be807990c-logs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.623777 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-run-httpd\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.624030 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-log-httpd\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.624264 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-scripts\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.624565 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-config-data\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.625033 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.625715 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-public-tls-certs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.626483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-internal-tls-certs\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.626657 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.628529 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-config-data\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.629818 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6bfc63-e5ee-4960-8b68-bf9be807990c-combined-ca-bundle\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.634838 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-scripts\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.642448 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzqv\" (UniqueName: \"kubernetes.io/projected/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-kube-api-access-wzzqv\") pod \"ceilometer-0\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") " pod="openstack/ceilometer-0"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.642609 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsk85\" (UniqueName: \"kubernetes.io/projected/9b6bfc63-e5ee-4960-8b68-bf9be807990c-kube-api-access-bsk85\") pod \"placement-d9964f68-4b9hp\" (UID: \"9b6bfc63-e5ee-4960-8b68-bf9be807990c\") " pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:36 crc kubenswrapper[5050]: I0131 05:41:36.766348 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9964f68-4b9hp"
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.200140 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.213217 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.261638 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9964f68-4b9hp"]
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.332511 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-log-httpd\") pod \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") "
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.332564 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-scripts\") pod \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") "
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.332667 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-config-data\") pod \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") "
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.332704 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-run-httpd\") pod \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") "
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.332804 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-combined-ca-bundle\") pod \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") "
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.332852 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzzqv\" (UniqueName: \"kubernetes.io/projected/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-kube-api-access-wzzqv\") pod \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") "
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.332889 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-sg-core-conf-yaml\") pod \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\" (UID: \"b43085dd-f2f1-41c4-8a7f-20c34c8d224c\") "
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.333549 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b43085dd-f2f1-41c4-8a7f-20c34c8d224c" (UID: "b43085dd-f2f1-41c4-8a7f-20c34c8d224c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.338395 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-kube-api-access-wzzqv" (OuterVolumeSpecName: "kube-api-access-wzzqv") pod "b43085dd-f2f1-41c4-8a7f-20c34c8d224c" (UID: "b43085dd-f2f1-41c4-8a7f-20c34c8d224c"). InnerVolumeSpecName "kube-api-access-wzzqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.339422 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b43085dd-f2f1-41c4-8a7f-20c34c8d224c" (UID: "b43085dd-f2f1-41c4-8a7f-20c34c8d224c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.343384 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b43085dd-f2f1-41c4-8a7f-20c34c8d224c" (UID: "b43085dd-f2f1-41c4-8a7f-20c34c8d224c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.352461 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-config-data" (OuterVolumeSpecName: "config-data") pod "b43085dd-f2f1-41c4-8a7f-20c34c8d224c" (UID: "b43085dd-f2f1-41c4-8a7f-20c34c8d224c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.353618 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-scripts" (OuterVolumeSpecName: "scripts") pod "b43085dd-f2f1-41c4-8a7f-20c34c8d224c" (UID: "b43085dd-f2f1-41c4-8a7f-20c34c8d224c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.354149 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b43085dd-f2f1-41c4-8a7f-20c34c8d224c" (UID: "b43085dd-f2f1-41c4-8a7f-20c34c8d224c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.434434 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.434475 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.434500 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.434521 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzzqv\" (UniqueName: \"kubernetes.io/projected/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-kube-api-access-wzzqv\") on node \"crc\" DevicePath \"\"" Jan 31 
05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.434533 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.434543 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.434554 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43085dd-f2f1-41c4-8a7f-20c34c8d224c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.609741 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-88mvr" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.739893 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-db-sync-config-data\") pod \"4e9fb9c4-2743-4932-8605-f9be30344553\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.740233 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcj7v\" (UniqueName: \"kubernetes.io/projected/4e9fb9c4-2743-4932-8605-f9be30344553-kube-api-access-fcj7v\") pod \"4e9fb9c4-2743-4932-8605-f9be30344553\" (UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.740299 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-combined-ca-bundle\") pod \"4e9fb9c4-2743-4932-8605-f9be30344553\" 
(UID: \"4e9fb9c4-2743-4932-8605-f9be30344553\") " Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.744791 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9fb9c4-2743-4932-8605-f9be30344553-kube-api-access-fcj7v" (OuterVolumeSpecName: "kube-api-access-fcj7v") pod "4e9fb9c4-2743-4932-8605-f9be30344553" (UID: "4e9fb9c4-2743-4932-8605-f9be30344553"). InnerVolumeSpecName "kube-api-access-fcj7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.746454 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4e9fb9c4-2743-4932-8605-f9be30344553" (UID: "4e9fb9c4-2743-4932-8605-f9be30344553"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.746680 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d053842c-8e88-4a70-b94c-1cd91a50b731" path="/var/lib/kubelet/pods/d053842c-8e88-4a70-b94c-1cd91a50b731/volumes" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.773191 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e9fb9c4-2743-4932-8605-f9be30344553" (UID: "4e9fb9c4-2743-4932-8605-f9be30344553"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.852278 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.852328 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcj7v\" (UniqueName: \"kubernetes.io/projected/4e9fb9c4-2743-4932-8605-f9be30344553-kube-api-access-fcj7v\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:37 crc kubenswrapper[5050]: I0131 05:41:37.852342 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9fb9c4-2743-4932-8605-f9be30344553-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.208067 5050 generic.go:334] "Generic (PLEG): container finished" podID="dad1668e-92d0-48a9-9e34-aa95875ce641" containerID="68f7f56ffae81e641128b37b068c46006d3048daab86f910905070b2f0b5ad97" exitCode=0 Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.208131 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gld6" event={"ID":"dad1668e-92d0-48a9-9e34-aa95875ce641","Type":"ContainerDied","Data":"68f7f56ffae81e641128b37b068c46006d3048daab86f910905070b2f0b5ad97"} Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.209342 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-88mvr" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.209337 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88mvr" event={"ID":"4e9fb9c4-2743-4932-8605-f9be30344553","Type":"ContainerDied","Data":"fdca5254c6cecec40e7db74311a47194b4fbe1dd07b1a84f14ed8f03053afe9f"} Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.209577 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdca5254c6cecec40e7db74311a47194b4fbe1dd07b1a84f14ed8f03053afe9f" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.210822 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.212066 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9964f68-4b9hp" event={"ID":"9b6bfc63-e5ee-4960-8b68-bf9be807990c","Type":"ContainerStarted","Data":"e821005ab53f531609160b34c742e3be07918f27b1234d0d0412489c3bb20559"} Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.212086 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9964f68-4b9hp" event={"ID":"9b6bfc63-e5ee-4960-8b68-bf9be807990c","Type":"ContainerStarted","Data":"967aad0a7f8f2eb49ba6891c005cc8b6450aa50f5112c1c48badc57952de31be"} Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.212103 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d9964f68-4b9hp" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.212115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9964f68-4b9hp" event={"ID":"9b6bfc63-e5ee-4960-8b68-bf9be807990c","Type":"ContainerStarted","Data":"e7e445c8abdf9876d3d55d8383552443ad0fd3e7f8f9439498430f18df61cdbc"} Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.212237 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-d9964f68-4b9hp" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.242410 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.284029 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.294646 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.310128 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:41:38 crc kubenswrapper[5050]: E0131 05:41:38.310483 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9fb9c4-2743-4932-8605-f9be30344553" containerName="barbican-db-sync" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.310494 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9fb9c4-2743-4932-8605-f9be30344553" containerName="barbican-db-sync" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.310644 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9fb9c4-2743-4932-8605-f9be30344553" containerName="barbican-db-sync" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.311908 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.315340 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d9964f68-4b9hp" podStartSLOduration=2.315324733 podStartE2EDuration="2.315324733s" podCreationTimestamp="2026-01-31 05:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:41:38.302568114 +0000 UTC m=+1223.351729710" watchObservedRunningTime="2026-01-31 05:41:38.315324733 +0000 UTC m=+1223.364486329" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.317617 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.317872 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.350984 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.482691 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-run-httpd\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.482741 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.482789 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-log-httpd\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.482814 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.482852 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-config-data\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.482870 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-scripts\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.482886 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6wv9\" (UniqueName: \"kubernetes.io/projected/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-kube-api-access-x6wv9\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.485678 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d7f4bb587-ddb7l"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.491060 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.494616 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-884nh" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.494837 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.494975 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.527902 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d7f4bb587-ddb7l"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.544373 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c797994c8-m9z4k"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.545686 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.550459 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.573859 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c797994c8-m9z4k"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.584858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-run-httpd\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585149 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585246 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-combined-ca-bundle\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-config-data-custom\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 
31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585439 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-log-httpd\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585525 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585607 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7797d\" (UniqueName: \"kubernetes.io/projected/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-kube-api-access-7797d\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585684 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-config-data\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585762 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-logs\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 
05:41:38.585844 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-config-data\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.585917 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-scripts\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.586010 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6wv9\" (UniqueName: \"kubernetes.io/projected/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-kube-api-access-x6wv9\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.586863 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-run-httpd\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.594299 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-log-httpd\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.600905 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-config-data\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") 
" pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.602026 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.607472 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-scripts\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.617428 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.617967 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6wv9\" (UniqueName: \"kubernetes.io/projected/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-kube-api-access-x6wv9\") pod \"ceilometer-0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.631126 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699df9757c-22lpd"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.634159 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.655915 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.665298 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-22lpd"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.688911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-config-data-custom\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689085 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-config-data\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7797d\" (UniqueName: \"kubernetes.io/projected/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-kube-api-access-7797d\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689143 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-config-data-custom\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689161 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-config-data\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689183 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-logs\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689235 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-combined-ca-bundle\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689279 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d8a8a39-709e-45ee-8694-2e648feebbae-logs\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689296 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-combined-ca-bundle\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689318 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6dkq\" (UniqueName: \"kubernetes.io/projected/8d8a8a39-709e-45ee-8694-2e648feebbae-kube-api-access-g6dkq\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.689749 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-logs\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.701438 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-config-data\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.708922 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-config-data-custom\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.711574 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-combined-ca-bundle\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.717301 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7797d\" (UniqueName: \"kubernetes.io/projected/4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b-kube-api-access-7797d\") pod \"barbican-worker-7d7f4bb587-ddb7l\" (UID: \"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b\") " pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.718600 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-648cf7894d-hsztl"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.719964 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.726445 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.738515 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-648cf7894d-hsztl"] Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.798832 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.798896 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdpsn\" (UniqueName: \"kubernetes.io/projected/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-kube-api-access-hdpsn\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.798938 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-config-data\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-config-data-custom\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799066 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-dns-svc\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799107 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-combined-ca-bundle\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799132 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-config\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799155 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799176 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d8a8a39-709e-45ee-8694-2e648feebbae-logs\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799200 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6dkq\" (UniqueName: \"kubernetes.io/projected/8d8a8a39-709e-45ee-8694-2e648feebbae-kube-api-access-g6dkq\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.799928 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d8a8a39-709e-45ee-8694-2e648feebbae-logs\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.802380 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-config-data-custom\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 
05:41:38.803331 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-combined-ca-bundle\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.806045 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d8a8a39-709e-45ee-8694-2e648feebbae-config-data\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.816995 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d7f4bb587-ddb7l" Jan 31 05:41:38 crc kubenswrapper[5050]: I0131 05:41:38.832454 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6dkq\" (UniqueName: \"kubernetes.io/projected/8d8a8a39-709e-45ee-8694-2e648feebbae-kube-api-access-g6dkq\") pod \"barbican-keystone-listener-c797994c8-m9z4k\" (UID: \"8d8a8a39-709e-45ee-8694-2e648feebbae\") " pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.866366 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901545 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-combined-ca-bundle\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901589 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-config\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901651 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fce74dc-b894-413b-85d2-0b16ab6808e1-logs\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901692 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " 
pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901714 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2z7\" (UniqueName: \"kubernetes.io/projected/2fce74dc-b894-413b-85d2-0b16ab6808e1-kube-api-access-kl2z7\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901734 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdpsn\" (UniqueName: \"kubernetes.io/projected/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-kube-api-access-hdpsn\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901766 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data-custom\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.901834 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-dns-svc\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: 
\"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.903033 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-dns-svc\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.903659 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-config\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.903871 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.904080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:38.918943 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdpsn\" (UniqueName: \"kubernetes.io/projected/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-kube-api-access-hdpsn\") pod \"dnsmasq-dns-699df9757c-22lpd\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc 
kubenswrapper[5050]: I0131 05:41:38.996507 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.003065 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.003156 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-combined-ca-bundle\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.003235 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fce74dc-b894-413b-85d2-0b16ab6808e1-logs\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.003293 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl2z7\" (UniqueName: \"kubernetes.io/projected/2fce74dc-b894-413b-85d2-0b16ab6808e1-kube-api-access-kl2z7\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.003324 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data-custom\") pod 
\"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.003649 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fce74dc-b894-413b-85d2-0b16ab6808e1-logs\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.008005 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-combined-ca-bundle\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.008222 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.009597 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data-custom\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.020347 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl2z7\" (UniqueName: \"kubernetes.io/projected/2fce74dc-b894-413b-85d2-0b16ab6808e1-kube-api-access-kl2z7\") pod \"barbican-api-648cf7894d-hsztl\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " 
pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.029528 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.029577 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.029622 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.030425 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37867fe0b3a3a54da7bcbf64f0d3572ca6af3a27ac44fef3f2c635dee432f98f"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.030493 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://37867fe0b3a3a54da7bcbf64f0d3572ca6af3a27ac44fef3f2c635dee432f98f" gracePeriod=600 Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.117344 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.226177 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="37867fe0b3a3a54da7bcbf64f0d3572ca6af3a27ac44fef3f2c635dee432f98f" exitCode=0 Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.226392 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"37867fe0b3a3a54da7bcbf64f0d3572ca6af3a27ac44fef3f2c635dee432f98f"} Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.226460 5050 scope.go:117] "RemoveContainer" containerID="28ca310875e65cf5e9290eaf5b0d71245b16dc8b0b1ac33324bea4c715946d1f" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.761776 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43085dd-f2f1-41c4-8a7f-20c34c8d224c" path="/var/lib/kubelet/pods/b43085dd-f2f1-41c4-8a7f-20c34c8d224c/volumes" Jan 31 05:41:39 crc kubenswrapper[5050]: I0131 05:41:39.934990 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5gld6" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.020418 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d7f4bb587-ddb7l"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.022331 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dad1668e-92d0-48a9-9e34-aa95875ce641-etc-machine-id\") pod \"dad1668e-92d0-48a9-9e34-aa95875ce641\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.022368 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-scripts\") pod \"dad1668e-92d0-48a9-9e34-aa95875ce641\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.022405 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-config-data\") pod \"dad1668e-92d0-48a9-9e34-aa95875ce641\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.022427 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-db-sync-config-data\") pod \"dad1668e-92d0-48a9-9e34-aa95875ce641\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.022587 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pnw\" (UniqueName: \"kubernetes.io/projected/dad1668e-92d0-48a9-9e34-aa95875ce641-kube-api-access-l4pnw\") pod \"dad1668e-92d0-48a9-9e34-aa95875ce641\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " Jan 
31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.022647 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-combined-ca-bundle\") pod \"dad1668e-92d0-48a9-9e34-aa95875ce641\" (UID: \"dad1668e-92d0-48a9-9e34-aa95875ce641\") " Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.025862 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dad1668e-92d0-48a9-9e34-aa95875ce641-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dad1668e-92d0-48a9-9e34-aa95875ce641" (UID: "dad1668e-92d0-48a9-9e34-aa95875ce641"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:41:40 crc kubenswrapper[5050]: W0131 05:41:40.028139 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b0dfbd1_ac80_477b_8dd6_b283bd4e2a6b.slice/crio-0fec9b128f434257a732c9e6cd919bda4f60cd439b2e2e560df1989f0e7a8d3d WatchSource:0}: Error finding container 0fec9b128f434257a732c9e6cd919bda4f60cd439b2e2e560df1989f0e7a8d3d: Status 404 returned error can't find the container with id 0fec9b128f434257a732c9e6cd919bda4f60cd439b2e2e560df1989f0e7a8d3d Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.028175 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-22lpd"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.036398 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-648cf7894d-hsztl"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.039352 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad1668e-92d0-48a9-9e34-aa95875ce641-kube-api-access-l4pnw" (OuterVolumeSpecName: "kube-api-access-l4pnw") pod "dad1668e-92d0-48a9-9e34-aa95875ce641" (UID: 
"dad1668e-92d0-48a9-9e34-aa95875ce641"). InnerVolumeSpecName "kube-api-access-l4pnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.039867 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dad1668e-92d0-48a9-9e34-aa95875ce641" (UID: "dad1668e-92d0-48a9-9e34-aa95875ce641"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.042707 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-scripts" (OuterVolumeSpecName: "scripts") pod "dad1668e-92d0-48a9-9e34-aa95875ce641" (UID: "dad1668e-92d0-48a9-9e34-aa95875ce641"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.078080 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dad1668e-92d0-48a9-9e34-aa95875ce641" (UID: "dad1668e-92d0-48a9-9e34-aa95875ce641"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.105064 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-config-data" (OuterVolumeSpecName: "config-data") pod "dad1668e-92d0-48a9-9e34-aa95875ce641" (UID: "dad1668e-92d0-48a9-9e34-aa95875ce641"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.124921 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4pnw\" (UniqueName: \"kubernetes.io/projected/dad1668e-92d0-48a9-9e34-aa95875ce641-kube-api-access-l4pnw\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.124967 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.124976 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dad1668e-92d0-48a9-9e34-aa95875ce641-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.124984 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.124992 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.125001 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dad1668e-92d0-48a9-9e34-aa95875ce641-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.223080 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:41:40 crc kubenswrapper[5050]: W0131 05:41:40.227316 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a1334ea_e70a_4cd5_ae82_7013bf3a8ee0.slice/crio-c9881476728faa88a9db4b40ea1f7b0234a576938052be70ad82d3848c2c9d9e WatchSource:0}: Error finding container c9881476728faa88a9db4b40ea1f7b0234a576938052be70ad82d3848c2c9d9e: Status 404 returned error can't find the container with id c9881476728faa88a9db4b40ea1f7b0234a576938052be70ad82d3848c2c9d9e Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.229840 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c797994c8-m9z4k"] Jan 31 05:41:40 crc kubenswrapper[5050]: W0131 05:41:40.231324 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d8a8a39_709e_45ee_8694_2e648feebbae.slice/crio-415ee18bd7e00abd15a97699beea27353969e3e5c810e4e6ce5f915da7fb5eb7 WatchSource:0}: Error finding container 415ee18bd7e00abd15a97699beea27353969e3e5c810e4e6ce5f915da7fb5eb7: Status 404 returned error can't find the container with id 415ee18bd7e00abd15a97699beea27353969e3e5c810e4e6ce5f915da7fb5eb7 Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.241891 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cf7894d-hsztl" event={"ID":"2fce74dc-b894-413b-85d2-0b16ab6808e1","Type":"ContainerStarted","Data":"53fd08ff879621937bc6e6510bab1cbcf10841d208defd983698aa5aeca2f2e2"} Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.250609 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gld6" event={"ID":"dad1668e-92d0-48a9-9e34-aa95875ce641","Type":"ContainerDied","Data":"e4544cb9af9fccedd4e4373b86547340ad9501ff61d0a04f2b59f41c1bed8a94"} Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.250648 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4544cb9af9fccedd4e4373b86547340ad9501ff61d0a04f2b59f41c1bed8a94" Jan 31 05:41:40 crc 
kubenswrapper[5050]: I0131 05:41:40.250700 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5gld6" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.269494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-22lpd" event={"ID":"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6","Type":"ContainerStarted","Data":"9d19d0996285bd08857fa10bba7eeb0286b9807ff54013bec7705d89eaea7c3c"} Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.276914 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"a251b39bb9c1d28bca8640aed32573ece3622a90bd61ebf25455027ba42bf7e7"} Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.279750 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d7f4bb587-ddb7l" event={"ID":"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b","Type":"ContainerStarted","Data":"0fec9b128f434257a732c9e6cd919bda4f60cd439b2e2e560df1989f0e7a8d3d"} Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.623204 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-22lpd"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.648111 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:41:40 crc kubenswrapper[5050]: E0131 05:41:40.648457 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad1668e-92d0-48a9-9e34-aa95875ce641" containerName="cinder-db-sync" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.648473 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad1668e-92d0-48a9-9e34-aa95875ce641" containerName="cinder-db-sync" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.648637 5050 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dad1668e-92d0-48a9-9e34-aa95875ce641" containerName="cinder-db-sync" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.649511 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.678394 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.687499 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.687705 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.690711 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.700638 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w5fzq" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.728924 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-zw6q9"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.730306 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.763855 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.763896 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.763975 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.763995 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftd6j\" (UniqueName: \"kubernetes.io/projected/a4951693-452d-4484-88cf-692f800e1160-kube-api-access-ftd6j\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.764027 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4951693-452d-4484-88cf-692f800e1160-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " 
pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.764077 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.783397 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-zw6q9"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.840600 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.855247 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.855346 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.857846 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868082 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868120 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc 
kubenswrapper[5050]: I0131 05:41:40.868141 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-config\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868159 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftd6j\" (UniqueName: \"kubernetes.io/projected/a4951693-452d-4484-88cf-692f800e1160-kube-api-access-ftd6j\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4951693-452d-4484-88cf-692f800e1160-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868210 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z77q\" (UniqueName: \"kubernetes.io/projected/d544bf99-86ca-41e6-9b6d-c19906cbf426-kube-api-access-4z77q\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868240 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 
05:41:40.868286 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868333 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868380 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.868649 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4951693-452d-4484-88cf-692f800e1160-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.876066 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.877121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.889454 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.891208 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.897444 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftd6j\" (UniqueName: \"kubernetes.io/projected/a4951693-452d-4484-88cf-692f800e1160-kube-api-access-ftd6j\") pod \"cinder-scheduler-0\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " pod="openstack/cinder-scheduler-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971651 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " 
pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971696 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971715 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971745 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rvh\" (UniqueName: \"kubernetes.io/projected/e05f0444-2e99-415b-9fb5-b309bb93518d-kube-api-access-x2rvh\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971763 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-scripts\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971807 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971833 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e05f0444-2e99-415b-9fb5-b309bb93518d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971856 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e05f0444-2e99-415b-9fb5-b309bb93518d-logs\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971890 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971916 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-config\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971966 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z77q\" (UniqueName: \"kubernetes.io/projected/d544bf99-86ca-41e6-9b6d-c19906cbf426-kube-api-access-4z77q\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.971996 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.974465 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.974607 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.974667 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:40 crc kubenswrapper[5050]: I0131 05:41:40.975061 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-config\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.000473 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z77q\" (UniqueName: \"kubernetes.io/projected/d544bf99-86ca-41e6-9b6d-c19906cbf426-kube-api-access-4z77q\") pod \"dnsmasq-dns-5b76cdf485-zw6q9\" 
(UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.001268 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.070315 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.074900 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e05f0444-2e99-415b-9fb5-b309bb93518d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.074934 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e05f0444-2e99-415b-9fb5-b309bb93518d-logs\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.075082 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.075103 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.075118 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.075149 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rvh\" (UniqueName: \"kubernetes.io/projected/e05f0444-2e99-415b-9fb5-b309bb93518d-kube-api-access-x2rvh\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.075169 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-scripts\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.080304 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e05f0444-2e99-415b-9fb5-b309bb93518d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.080726 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e05f0444-2e99-415b-9fb5-b309bb93518d-logs\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.081410 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: 
I0131 05:41:41.081704 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-scripts\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.087470 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.091599 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.112626 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rvh\" (UniqueName: \"kubernetes.io/projected/e05f0444-2e99-415b-9fb5-b309bb93518d-kube-api-access-x2rvh\") pod \"cinder-api-0\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.259896 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.303683 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" event={"ID":"8d8a8a39-709e-45ee-8694-2e648feebbae","Type":"ContainerStarted","Data":"415ee18bd7e00abd15a97699beea27353969e3e5c810e4e6ce5f915da7fb5eb7"} Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.307151 5050 generic.go:334] "Generic (PLEG): container finished" podID="6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" containerID="cb5fffaf0791c4e3822beacc47e8c794bea70de3e22f18109e590fbd11fcea88" exitCode=0 Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.307206 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-22lpd" event={"ID":"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6","Type":"ContainerDied","Data":"cb5fffaf0791c4e3822beacc47e8c794bea70de3e22f18109e590fbd11fcea88"} Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.310524 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cf7894d-hsztl" event={"ID":"2fce74dc-b894-413b-85d2-0b16ab6808e1","Type":"ContainerStarted","Data":"2a2a0fa4a8bf6bf0b27ec912f5e73042d6714655f74665522e07fe6a702b299e"} Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.310547 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cf7894d-hsztl" event={"ID":"2fce74dc-b894-413b-85d2-0b16ab6808e1","Type":"ContainerStarted","Data":"d0380ac7192163b25a8431d9a7dccddfdeaf903fe2ba8746c1c92276876c0d63"} Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.311079 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.311108 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 
05:41:41.315360 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerStarted","Data":"4193f76852747704db40eea62cf50bc6746bcb598b82a8df745e617fa1401e5d"} Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.315383 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerStarted","Data":"c9881476728faa88a9db4b40ea1f7b0234a576938052be70ad82d3848c2c9d9e"} Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.361536 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-648cf7894d-hsztl" podStartSLOduration=3.361519784 podStartE2EDuration="3.361519784s" podCreationTimestamp="2026-01-31 05:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:41:41.360962959 +0000 UTC m=+1226.410124555" watchObservedRunningTime="2026-01-31 05:41:41.361519784 +0000 UTC m=+1226.410681380" Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.560297 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.715354 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-zw6q9"] Jan 31 05:41:41 crc kubenswrapper[5050]: I0131 05:41:41.949868 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:42 crc kubenswrapper[5050]: W0131 05:41:42.299788 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode05f0444_2e99_415b_9fb5_b309bb93518d.slice/crio-c9dfaf88eec01591d8fba0a88bb4906af9836ca0eb9b48a58c86f608090cdd3b WatchSource:0}: Error finding container 
c9dfaf88eec01591d8fba0a88bb4906af9836ca0eb9b48a58c86f608090cdd3b: Status 404 returned error can't find the container with id c9dfaf88eec01591d8fba0a88bb4906af9836ca0eb9b48a58c86f608090cdd3b Jan 31 05:41:42 crc kubenswrapper[5050]: E0131 05:41:42.312712 5050 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 31 05:41:42 crc kubenswrapper[5050]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 31 05:41:42 crc kubenswrapper[5050]: > podSandboxID="9d19d0996285bd08857fa10bba7eeb0286b9807ff54013bec7705d89eaea7c3c" Jan 31 05:41:42 crc kubenswrapper[5050]: E0131 05:41:42.312866 5050 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 31 05:41:42 crc kubenswrapper[5050]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n657hcfh5fdh7h559h545hb4h5fhd5hfdh669h5b7h587hfdh67fh59ch5ch7bhf7h658h54fh8chf7h5d5h68ch65bh68h67chbfh77h9bh585q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdpsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-699df9757c-22lpd_openstack(6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 31 05:41:42 crc kubenswrapper[5050]: > logger="UnhandledError" Jan 31 05:41:42 crc kubenswrapper[5050]: E0131 05:41:42.314024 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-699df9757c-22lpd" podUID="6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.322985 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e05f0444-2e99-415b-9fb5-b309bb93518d","Type":"ContainerStarted","Data":"c9dfaf88eec01591d8fba0a88bb4906af9836ca0eb9b48a58c86f608090cdd3b"} Jan 31 05:41:42 crc 
kubenswrapper[5050]: I0131 05:41:42.326233 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a4951693-452d-4484-88cf-692f800e1160","Type":"ContainerStarted","Data":"f7b0fac36173054512399d213070981224442a2f4d92222f53ec697b747283b2"} Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.330662 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerStarted","Data":"efc374ebd56f42f37487a548173a66e3a13b6e3c1477de2e62425a33464c187f"} Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.332092 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" event={"ID":"d544bf99-86ca-41e6-9b6d-c19906cbf426","Type":"ContainerStarted","Data":"7438ca9c95ef92181723b973549c93bfe09fc73cbf4dc80d3232fba41055f5bd"} Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.837048 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.843995 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.854336 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.854506 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.855170 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wvmt7" Jan 31 05:41:42 crc kubenswrapper[5050]: I0131 05:41:42.856192 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.024701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.024773 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.024851 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.024996 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2cz2\" (UniqueName: \"kubernetes.io/projected/d2d7ea2f-fab4-40a1-8624-275b760245fd-kube-api-access-c2cz2\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.126765 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2cz2\" (UniqueName: \"kubernetes.io/projected/d2d7ea2f-fab4-40a1-8624-275b760245fd-kube-api-access-c2cz2\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.126817 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.126862 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.126893 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.127828 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.134000 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.141554 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.152527 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2cz2\" (UniqueName: \"kubernetes.io/projected/d2d7ea2f-fab4-40a1-8624-275b760245fd-kube-api-access-c2cz2\") pod \"openstackclient\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.176664 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.236445 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.245697 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.258673 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.276049 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 05:41:43 crc kubenswrapper[5050]: E0131 05:41:43.276497 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" containerName="init" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.276521 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" containerName="init" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.276736 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" containerName="init" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.277390 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.284540 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.339533 5050 generic.go:334] "Generic (PLEG): container finished" podID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerID="d7bdb3aeff041bb068acbf7609d0ee12898491e73e7ac99471d7ef894b8f0f38" exitCode=0 Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.339617 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" event={"ID":"d544bf99-86ca-41e6-9b6d-c19906cbf426","Type":"ContainerDied","Data":"d7bdb3aeff041bb068acbf7609d0ee12898491e73e7ac99471d7ef894b8f0f38"} Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.342115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-22lpd" event={"ID":"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6","Type":"ContainerDied","Data":"9d19d0996285bd08857fa10bba7eeb0286b9807ff54013bec7705d89eaea7c3c"} Jan 31 
05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.342168 5050 scope.go:117] "RemoveContainer" containerID="cb5fffaf0791c4e3822beacc47e8c794bea70de3e22f18109e590fbd11fcea88" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.342285 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-22lpd" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.432452 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdpsn\" (UniqueName: \"kubernetes.io/projected/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-kube-api-access-hdpsn\") pod \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.432545 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-sb\") pod \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.432563 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-config\") pod \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.432615 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-dns-svc\") pod \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.432646 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-nb\") pod \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\" (UID: \"6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6\") " Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.432963 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58791cf1-4858-4849-9ada-2a41e6df553e-openstack-config\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.433000 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4gp\" (UniqueName: \"kubernetes.io/projected/58791cf1-4858-4849-9ada-2a41e6df553e-kube-api-access-ll4gp\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.433106 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58791cf1-4858-4849-9ada-2a41e6df553e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.433164 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58791cf1-4858-4849-9ada-2a41e6df553e-openstack-config-secret\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.436660 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-kube-api-access-hdpsn" (OuterVolumeSpecName: 
"kube-api-access-hdpsn") pod "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" (UID: "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6"). InnerVolumeSpecName "kube-api-access-hdpsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.500733 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" (UID: "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.516035 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" (UID: "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.518177 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" (UID: "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.535508 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4gp\" (UniqueName: \"kubernetes.io/projected/58791cf1-4858-4849-9ada-2a41e6df553e-kube-api-access-ll4gp\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.535709 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58791cf1-4858-4849-9ada-2a41e6df553e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.535803 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58791cf1-4858-4849-9ada-2a41e6df553e-openstack-config-secret\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.535934 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58791cf1-4858-4849-9ada-2a41e6df553e-openstack-config\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.536002 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.536014 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.536024 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdpsn\" (UniqueName: \"kubernetes.io/projected/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-kube-api-access-hdpsn\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.536033 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.536806 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58791cf1-4858-4849-9ada-2a41e6df553e-openstack-config\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.541420 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-config" (OuterVolumeSpecName: "config") pod "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" (UID: "6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.541535 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58791cf1-4858-4849-9ada-2a41e6df553e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.542198 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58791cf1-4858-4849-9ada-2a41e6df553e-openstack-config-secret\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.553862 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4gp\" (UniqueName: \"kubernetes.io/projected/58791cf1-4858-4849-9ada-2a41e6df553e-kube-api-access-ll4gp\") pod \"openstackclient\" (UID: \"58791cf1-4858-4849-9ada-2a41e6df553e\") " pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.594526 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.638277 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.713620 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-22lpd"] Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.719363 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-22lpd"] Jan 31 05:41:43 crc kubenswrapper[5050]: I0131 05:41:43.760805 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6" path="/var/lib/kubelet/pods/6a5ba8b3-64dc-4de1-9b4b-5a9aa392edb6/volumes" Jan 31 05:41:44 crc kubenswrapper[5050]: E0131 05:41:44.009466 5050 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 31 05:41:44 crc kubenswrapper[5050]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d2d7ea2f-fab4-40a1-8624-275b760245fd_0(c786aaca133ee85c25ca514c2ece1f7be26703ad70af76478c825d36d50bbcf1): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c786aaca133ee85c25ca514c2ece1f7be26703ad70af76478c825d36d50bbcf1" Netns:"/var/run/netns/b304f1bb-c529-4fec-88b4-8f63e4460119" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c786aaca133ee85c25ca514c2ece1f7be26703ad70af76478c825d36d50bbcf1;K8S_POD_UID=d2d7ea2f-fab4-40a1-8624-275b760245fd" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d2d7ea2f-fab4-40a1-8624-275b760245fd]: expected 
pod UID "d2d7ea2f-fab4-40a1-8624-275b760245fd" but got "58791cf1-4858-4849-9ada-2a41e6df553e" from Kube API Jan 31 05:41:44 crc kubenswrapper[5050]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 05:41:44 crc kubenswrapper[5050]: > Jan 31 05:41:44 crc kubenswrapper[5050]: E0131 05:41:44.009748 5050 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 31 05:41:44 crc kubenswrapper[5050]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d2d7ea2f-fab4-40a1-8624-275b760245fd_0(c786aaca133ee85c25ca514c2ece1f7be26703ad70af76478c825d36d50bbcf1): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c786aaca133ee85c25ca514c2ece1f7be26703ad70af76478c825d36d50bbcf1" Netns:"/var/run/netns/b304f1bb-c529-4fec-88b4-8f63e4460119" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c786aaca133ee85c25ca514c2ece1f7be26703ad70af76478c825d36d50bbcf1;K8S_POD_UID=d2d7ea2f-fab4-40a1-8624-275b760245fd" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d2d7ea2f-fab4-40a1-8624-275b760245fd]: expected pod UID "d2d7ea2f-fab4-40a1-8624-275b760245fd" but got "58791cf1-4858-4849-9ada-2a41e6df553e" from Kube API Jan 31 05:41:44 crc kubenswrapper[5050]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 05:41:44 crc kubenswrapper[5050]: > pod="openstack/openstackclient" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.353479 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e05f0444-2e99-415b-9fb5-b309bb93518d","Type":"ContainerStarted","Data":"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954"} Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.355815 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d7f4bb587-ddb7l" event={"ID":"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b","Type":"ContainerStarted","Data":"7afc0dd7c394a18e1b84bac68c812a0a5b01622592dd3bbaabb835e79d21c397"} Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.359162 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerStarted","Data":"e4074462d594465b50ae06f53cefe020315ab01e5e7ed5ecef494da2691f1347"} Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.363984 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" event={"ID":"d544bf99-86ca-41e6-9b6d-c19906cbf426","Type":"ContainerStarted","Data":"7c37ecb75da725901e892c9be86770ef45bf20a228bde915d4703c349e90fb3f"} Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.364116 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.366273 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" event={"ID":"8d8a8a39-709e-45ee-8694-2e648feebbae","Type":"ContainerStarted","Data":"601d9f58927f7161393d5b8a3345ed2baa95f917e9b1cd7dfef40998be811e42"} Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.394919 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.407897 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" podStartSLOduration=4.407878987 podStartE2EDuration="4.407878987s" podCreationTimestamp="2026-01-31 05:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:41:44.389309509 +0000 UTC m=+1229.438471105" watchObservedRunningTime="2026-01-31 05:41:44.407878987 +0000 UTC m=+1229.457040583" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.431382 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 05:41:44 crc kubenswrapper[5050]: W0131 05:41:44.477181 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58791cf1_4858_4849_9ada_2a41e6df553e.slice/crio-6034217df9174c470ceb0c543bbeff1ba9590aa21b159334baf22ed28047277a WatchSource:0}: Error finding container 6034217df9174c470ceb0c543bbeff1ba9590aa21b159334baf22ed28047277a: Status 404 returned error can't find the container with id 6034217df9174c470ceb0c543bbeff1ba9590aa21b159334baf22ed28047277a Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.559766 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.562786 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d2d7ea2f-fab4-40a1-8624-275b760245fd" podUID="58791cf1-4858-4849-9ada-2a41e6df553e" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.662428 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2cz2\" (UniqueName: \"kubernetes.io/projected/d2d7ea2f-fab4-40a1-8624-275b760245fd-kube-api-access-c2cz2\") pod \"d2d7ea2f-fab4-40a1-8624-275b760245fd\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.662531 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-combined-ca-bundle\") pod \"d2d7ea2f-fab4-40a1-8624-275b760245fd\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.662647 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config\") pod \"d2d7ea2f-fab4-40a1-8624-275b760245fd\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.662671 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config-secret\") pod \"d2d7ea2f-fab4-40a1-8624-275b760245fd\" (UID: \"d2d7ea2f-fab4-40a1-8624-275b760245fd\") " Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.663314 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d2d7ea2f-fab4-40a1-8624-275b760245fd" (UID: "d2d7ea2f-fab4-40a1-8624-275b760245fd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.671025 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2d7ea2f-fab4-40a1-8624-275b760245fd" (UID: "d2d7ea2f-fab4-40a1-8624-275b760245fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.684867 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d7ea2f-fab4-40a1-8624-275b760245fd-kube-api-access-c2cz2" (OuterVolumeSpecName: "kube-api-access-c2cz2") pod "d2d7ea2f-fab4-40a1-8624-275b760245fd" (UID: "d2d7ea2f-fab4-40a1-8624-275b760245fd"). InnerVolumeSpecName "kube-api-access-c2cz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.685521 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d2d7ea2f-fab4-40a1-8624-275b760245fd" (UID: "d2d7ea2f-fab4-40a1-8624-275b760245fd"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.764460 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.764492 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.764504 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2d7ea2f-fab4-40a1-8624-275b760245fd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:44 crc kubenswrapper[5050]: I0131 05:41:44.764513 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2cz2\" (UniqueName: \"kubernetes.io/projected/d2d7ea2f-fab4-40a1-8624-275b760245fd-kube-api-access-c2cz2\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.155533 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.407646 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" event={"ID":"8d8a8a39-709e-45ee-8694-2e648feebbae","Type":"ContainerStarted","Data":"47b41f5f8e70537e513e030ae4e0edfcb5a71647f8104e44da5d8146833fdaab"} Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.412676 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"58791cf1-4858-4849-9ada-2a41e6df553e","Type":"ContainerStarted","Data":"6034217df9174c470ceb0c543bbeff1ba9590aa21b159334baf22ed28047277a"} Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 
05:41:45.424255 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e05f0444-2e99-415b-9fb5-b309bb93518d","Type":"ContainerStarted","Data":"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7"} Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.424409 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.430382 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a4951693-452d-4484-88cf-692f800e1160","Type":"ContainerStarted","Data":"8d3d44c75e9765a0741de4f78d82b41249cc57552f6951f9543ffdd9ceef8059"} Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.433924 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d7f4bb587-ddb7l" event={"ID":"4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b","Type":"ContainerStarted","Data":"8d1da56079b0d9f4ead38f3e84a42f04c9c41ff17ebcd15e6bc7385d5f752025"} Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.434011 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.445159 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c797994c8-m9z4k" podStartSLOduration=3.835211528 podStartE2EDuration="7.445137625s" podCreationTimestamp="2026-01-31 05:41:38 +0000 UTC" firstStartedPulling="2026-01-31 05:41:40.237502389 +0000 UTC m=+1225.286663985" lastFinishedPulling="2026-01-31 05:41:43.847428486 +0000 UTC m=+1228.896590082" observedRunningTime="2026-01-31 05:41:45.43838333 +0000 UTC m=+1230.487544926" watchObservedRunningTime="2026-01-31 05:41:45.445137625 +0000 UTC m=+1230.494299221" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.474622 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.474604955 podStartE2EDuration="5.474604955s" podCreationTimestamp="2026-01-31 05:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:41:45.467360998 +0000 UTC m=+1230.516522594" watchObservedRunningTime="2026-01-31 05:41:45.474604955 +0000 UTC m=+1230.523766551" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.526557 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d7f4bb587-ddb7l" podStartSLOduration=3.673730404 podStartE2EDuration="7.526537674s" podCreationTimestamp="2026-01-31 05:41:38 +0000 UTC" firstStartedPulling="2026-01-31 05:41:40.029735521 +0000 UTC m=+1225.078897117" lastFinishedPulling="2026-01-31 05:41:43.882542791 +0000 UTC m=+1228.931704387" observedRunningTime="2026-01-31 05:41:45.510972122 +0000 UTC m=+1230.560133718" watchObservedRunningTime="2026-01-31 05:41:45.526537674 +0000 UTC m=+1230.575699270" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.529228 5050 status_manager.go:861] "Pod was deleted and then 
recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d2d7ea2f-fab4-40a1-8624-275b760245fd" podUID="58791cf1-4858-4849-9ada-2a41e6df553e" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.750068 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d7ea2f-fab4-40a1-8624-275b760245fd" path="/var/lib/kubelet/pods/d2d7ea2f-fab4-40a1-8624-275b760245fd/volumes" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.899885 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7995457cdd-4p7kh"] Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.901611 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.903734 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.904741 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.941864 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7995457cdd-4p7kh"] Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.965374 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844449d4-4111-40ad-9d23-dd9709c1a947-logs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.965417 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrps2\" (UniqueName: \"kubernetes.io/projected/844449d4-4111-40ad-9d23-dd9709c1a947-kube-api-access-jrps2\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: 
\"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.965477 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-internal-tls-certs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.965511 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-config-data\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.965532 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-combined-ca-bundle\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.965551 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-config-data-custom\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:45 crc kubenswrapper[5050]: I0131 05:41:45.965585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-public-tls-certs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.066533 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrps2\" (UniqueName: \"kubernetes.io/projected/844449d4-4111-40ad-9d23-dd9709c1a947-kube-api-access-jrps2\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.066599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-internal-tls-certs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.066620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-config-data\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.066639 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-combined-ca-bundle\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.066652 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-config-data-custom\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.066676 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-public-tls-certs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.066787 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844449d4-4111-40ad-9d23-dd9709c1a947-logs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.067202 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844449d4-4111-40ad-9d23-dd9709c1a947-logs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.072368 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-combined-ca-bundle\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.083503 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-internal-tls-certs\") pod 
\"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.084752 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-config-data-custom\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.089530 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-public-tls-certs\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.097680 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844449d4-4111-40ad-9d23-dd9709c1a947-config-data\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.098345 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrps2\" (UniqueName: \"kubernetes.io/projected/844449d4-4111-40ad-9d23-dd9709c1a947-kube-api-access-jrps2\") pod \"barbican-api-7995457cdd-4p7kh\" (UID: \"844449d4-4111-40ad-9d23-dd9709c1a947\") " pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.214312 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.502364 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a4951693-452d-4484-88cf-692f800e1160","Type":"ContainerStarted","Data":"eb4e689eb68a955994fa963cf3f7945be7325e28479a2c8e50ee1dab06e36bfa"} Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.502534 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api-log" containerID="cri-o://d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954" gracePeriod=30 Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.502764 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api" containerID="cri-o://c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7" gracePeriod=30 Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.527877 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.840262262 podStartE2EDuration="6.525853293s" podCreationTimestamp="2026-01-31 05:41:40 +0000 UTC" firstStartedPulling="2026-01-31 05:41:41.647331884 +0000 UTC m=+1226.696493480" lastFinishedPulling="2026-01-31 05:41:44.332922915 +0000 UTC m=+1229.382084511" observedRunningTime="2026-01-31 05:41:46.521944432 +0000 UTC m=+1231.571106028" watchObservedRunningTime="2026-01-31 05:41:46.525853293 +0000 UTC m=+1231.575014889" Jan 31 05:41:46 crc kubenswrapper[5050]: I0131 05:41:46.799745 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7995457cdd-4p7kh"] Jan 31 05:41:46 crc kubenswrapper[5050]: W0131 05:41:46.847339 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod844449d4_4111_40ad_9d23_dd9709c1a947.slice/crio-bf3fda7e388bc2608dec1aa74a6f4ac7fcadf9fa6a0cb04d7bccc8340cc1246c WatchSource:0}: Error finding container bf3fda7e388bc2608dec1aa74a6f4ac7fcadf9fa6a0cb04d7bccc8340cc1246c: Status 404 returned error can't find the container with id bf3fda7e388bc2608dec1aa74a6f4ac7fcadf9fa6a0cb04d7bccc8340cc1246c Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.408676 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.523586 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7995457cdd-4p7kh" event={"ID":"844449d4-4111-40ad-9d23-dd9709c1a947","Type":"ContainerStarted","Data":"c2075de9a0771303a9260425dbf35aca1cd7a33a4eb8f505c929fc857b36a75b"} Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.524178 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7995457cdd-4p7kh" event={"ID":"844449d4-4111-40ad-9d23-dd9709c1a947","Type":"ContainerStarted","Data":"671e7f42922e73e225975ddecec80bdcf24da40a3f9401639468cecb67a687f6"} Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.524271 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7995457cdd-4p7kh" event={"ID":"844449d4-4111-40ad-9d23-dd9709c1a947","Type":"ContainerStarted","Data":"bf3fda7e388bc2608dec1aa74a6f4ac7fcadf9fa6a0cb04d7bccc8340cc1246c"} Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.524809 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.524843 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528348 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e05f0444-2e99-415b-9fb5-b309bb93518d-etc-machine-id\") pod \"e05f0444-2e99-415b-9fb5-b309bb93518d\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528398 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e05f0444-2e99-415b-9fb5-b309bb93518d-logs\") pod \"e05f0444-2e99-415b-9fb5-b309bb93518d\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528447 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-scripts\") pod \"e05f0444-2e99-415b-9fb5-b309bb93518d\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528478 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e05f0444-2e99-415b-9fb5-b309bb93518d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e05f0444-2e99-415b-9fb5-b309bb93518d" (UID: "e05f0444-2e99-415b-9fb5-b309bb93518d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528530 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data-custom\") pod \"e05f0444-2e99-415b-9fb5-b309bb93518d\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528583 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data\") pod \"e05f0444-2e99-415b-9fb5-b309bb93518d\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528668 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-combined-ca-bundle\") pod \"e05f0444-2e99-415b-9fb5-b309bb93518d\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528691 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2rvh\" (UniqueName: \"kubernetes.io/projected/e05f0444-2e99-415b-9fb5-b309bb93518d-kube-api-access-x2rvh\") pod \"e05f0444-2e99-415b-9fb5-b309bb93518d\" (UID: \"e05f0444-2e99-415b-9fb5-b309bb93518d\") " Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.528933 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e05f0444-2e99-415b-9fb5-b309bb93518d-logs" (OuterVolumeSpecName: "logs") pod "e05f0444-2e99-415b-9fb5-b309bb93518d" (UID: "e05f0444-2e99-415b-9fb5-b309bb93518d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.529009 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e05f0444-2e99-415b-9fb5-b309bb93518d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.534455 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e05f0444-2e99-415b-9fb5-b309bb93518d" (UID: "e05f0444-2e99-415b-9fb5-b309bb93518d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.534903 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05f0444-2e99-415b-9fb5-b309bb93518d-kube-api-access-x2rvh" (OuterVolumeSpecName: "kube-api-access-x2rvh") pod "e05f0444-2e99-415b-9fb5-b309bb93518d" (UID: "e05f0444-2e99-415b-9fb5-b309bb93518d"). InnerVolumeSpecName "kube-api-access-x2rvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.538280 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-scripts" (OuterVolumeSpecName: "scripts") pod "e05f0444-2e99-415b-9fb5-b309bb93518d" (UID: "e05f0444-2e99-415b-9fb5-b309bb93518d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.540613 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerStarted","Data":"d14d374837d3d4622c06f3d64c13e8c54f61e743aeaee782c8634d4f2fed8865"} Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.541777 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.565943 5050 generic.go:334] "Generic (PLEG): container finished" podID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerID="c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7" exitCode=0 Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.565989 5050 generic.go:334] "Generic (PLEG): container finished" podID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerID="d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954" exitCode=143 Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.566197 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e05f0444-2e99-415b-9fb5-b309bb93518d","Type":"ContainerDied","Data":"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7"} Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.566224 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e05f0444-2e99-415b-9fb5-b309bb93518d","Type":"ContainerDied","Data":"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954"} Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.566234 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e05f0444-2e99-415b-9fb5-b309bb93518d","Type":"ContainerDied","Data":"c9dfaf88eec01591d8fba0a88bb4906af9836ca0eb9b48a58c86f608090cdd3b"} Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.566233 5050 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.566252 5050 scope.go:117] "RemoveContainer" containerID="c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.571162 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e05f0444-2e99-415b-9fb5-b309bb93518d" (UID: "e05f0444-2e99-415b-9fb5-b309bb93518d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.594350 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data" (OuterVolumeSpecName: "config-data") pod "e05f0444-2e99-415b-9fb5-b309bb93518d" (UID: "e05f0444-2e99-415b-9fb5-b309bb93518d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.595005 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.15663978 podStartE2EDuration="9.594988732s" podCreationTimestamp="2026-01-31 05:41:38 +0000 UTC" firstStartedPulling="2026-01-31 05:41:40.228671721 +0000 UTC m=+1225.277833317" lastFinishedPulling="2026-01-31 05:41:46.667020673 +0000 UTC m=+1231.716182269" observedRunningTime="2026-01-31 05:41:47.582472789 +0000 UTC m=+1232.631634385" watchObservedRunningTime="2026-01-31 05:41:47.594988732 +0000 UTC m=+1232.644150328" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.595116 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7995457cdd-4p7kh" podStartSLOduration=2.595112325 podStartE2EDuration="2.595112325s" podCreationTimestamp="2026-01-31 05:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:41:47.556257923 +0000 UTC m=+1232.605419519" watchObservedRunningTime="2026-01-31 05:41:47.595112325 +0000 UTC m=+1232.644273921" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.601913 5050 scope.go:117] "RemoveContainer" containerID="d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.617902 5050 scope.go:117] "RemoveContainer" containerID="c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7" Jan 31 05:41:47 crc kubenswrapper[5050]: E0131 05:41:47.622564 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7\": container with ID starting with c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7 not found: ID does not exist" 
containerID="c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.622611 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7"} err="failed to get container status \"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7\": rpc error: code = NotFound desc = could not find container \"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7\": container with ID starting with c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7 not found: ID does not exist" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.622641 5050 scope.go:117] "RemoveContainer" containerID="d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954" Jan 31 05:41:47 crc kubenswrapper[5050]: E0131 05:41:47.623098 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954\": container with ID starting with d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954 not found: ID does not exist" containerID="d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.623117 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954"} err="failed to get container status \"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954\": rpc error: code = NotFound desc = could not find container \"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954\": container with ID starting with d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954 not found: ID does not exist" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.623130 5050 scope.go:117] 
"RemoveContainer" containerID="c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.623550 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7"} err="failed to get container status \"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7\": rpc error: code = NotFound desc = could not find container \"c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7\": container with ID starting with c4a6c6b38b8a2e7f416f71fe1cbb5518f5aa6c61d2b4e5f9d4ce51ef57d5f6d7 not found: ID does not exist" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.623568 5050 scope.go:117] "RemoveContainer" containerID="d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.623792 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954"} err="failed to get container status \"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954\": rpc error: code = NotFound desc = could not find container \"d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954\": container with ID starting with d27c21de66448fbca867f75e058babfa5b2d45e4af30708446243fbea164c954 not found: ID does not exist" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.631214 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.631236 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2rvh\" (UniqueName: \"kubernetes.io/projected/e05f0444-2e99-415b-9fb5-b309bb93518d-kube-api-access-x2rvh\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.631247 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e05f0444-2e99-415b-9fb5-b309bb93518d-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.631255 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.631263 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.631271 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e05f0444-2e99-415b-9fb5-b309bb93518d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.886528 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.892499 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.909304 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:47 crc kubenswrapper[5050]: E0131 05:41:47.910109 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.910209 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api" Jan 31 05:41:47 crc kubenswrapper[5050]: E0131 05:41:47.910288 5050 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api-log" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.910341 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api-log" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.910534 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api-log" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.910614 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" containerName="cinder-api" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.911571 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.914897 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.915140 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.915272 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 05:41:47 crc kubenswrapper[5050]: I0131 05:41:47.934688 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.037605 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.037668 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5086536b-eaed-44e4-8951-ee45e91f09e4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.037689 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-config-data\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.037752 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgwqx\" (UniqueName: \"kubernetes.io/projected/5086536b-eaed-44e4-8951-ee45e91f09e4-kube-api-access-fgwqx\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.038033 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.038142 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-scripts\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.038198 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-config-data-custom\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.038217 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5086536b-eaed-44e4-8951-ee45e91f09e4-logs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.038281 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.139990 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-scripts\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140038 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-config-data-custom\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140057 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5086536b-eaed-44e4-8951-ee45e91f09e4-logs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: 
I0131 05:41:48.140079 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140125 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140165 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5086536b-eaed-44e4-8951-ee45e91f09e4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140181 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-config-data\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140216 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgwqx\" (UniqueName: \"kubernetes.io/projected/5086536b-eaed-44e4-8951-ee45e91f09e4-kube-api-access-fgwqx\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140261 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140717 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5086536b-eaed-44e4-8951-ee45e91f09e4-logs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.140800 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5086536b-eaed-44e4-8951-ee45e91f09e4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.144893 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.145119 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-config-data-custom\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.146775 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.147169 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-config-data\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.155494 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-scripts\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.155928 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5086536b-eaed-44e4-8951-ee45e91f09e4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.157631 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgwqx\" (UniqueName: \"kubernetes.io/projected/5086536b-eaed-44e4-8951-ee45e91f09e4-kube-api-access-fgwqx\") pod \"cinder-api-0\" (UID: \"5086536b-eaed-44e4-8951-ee45e91f09e4\") " pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.235531 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:41:48 crc kubenswrapper[5050]: I0131 05:41:48.813755 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:41:48 crc kubenswrapper[5050]: W0131 05:41:48.821974 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5086536b_eaed_44e4_8951_ee45e91f09e4.slice/crio-95fb29b544d5057349f87e5cfacc864617ad8cbac9d71c65f1fb8c2e60f76f37 WatchSource:0}: Error finding container 95fb29b544d5057349f87e5cfacc864617ad8cbac9d71c65f1fb8c2e60f76f37: Status 404 returned error can't find the container with id 95fb29b544d5057349f87e5cfacc864617ad8cbac9d71c65f1fb8c2e60f76f37 Jan 31 05:41:49 crc kubenswrapper[5050]: I0131 05:41:49.587833 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5086536b-eaed-44e4-8951-ee45e91f09e4","Type":"ContainerStarted","Data":"9b9ed31e1e374bb28efff170c5124e353c12bd20cc24f2ace015b408409b20b8"} Jan 31 05:41:49 crc kubenswrapper[5050]: I0131 05:41:49.587868 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5086536b-eaed-44e4-8951-ee45e91f09e4","Type":"ContainerStarted","Data":"95fb29b544d5057349f87e5cfacc864617ad8cbac9d71c65f1fb8c2e60f76f37"} Jan 31 05:41:49 crc kubenswrapper[5050]: I0131 05:41:49.748207 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05f0444-2e99-415b-9fb5-b309bb93518d" path="/var/lib/kubelet/pods/e05f0444-2e99-415b-9fb5-b309bb93518d/volumes" Jan 31 05:41:50 crc kubenswrapper[5050]: I0131 05:41:50.607074 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5086536b-eaed-44e4-8951-ee45e91f09e4","Type":"ContainerStarted","Data":"769145aabb70fab6617bd7178e333613b7af94f6e9f72b7cd49270d1f74c6c95"} Jan 31 05:41:50 crc kubenswrapper[5050]: I0131 05:41:50.607304 5050 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 05:41:50 crc kubenswrapper[5050]: I0131 05:41:50.632337 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.6323203040000003 podStartE2EDuration="3.632320304s" podCreationTimestamp="2026-01-31 05:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:41:50.628551701 +0000 UTC m=+1235.677713297" watchObservedRunningTime="2026-01-31 05:41:50.632320304 +0000 UTC m=+1235.681481900" Jan 31 05:41:50 crc kubenswrapper[5050]: I0131 05:41:50.905637 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:50 crc kubenswrapper[5050]: I0131 05:41:50.919977 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.002871 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.073149 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.167240 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-dml6c"] Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.169334 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" podUID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerName="dnsmasq-dns" containerID="cri-o://c1bda78eaf69c98db29e69077ed67d9a60cab756becd6bd3de755df17c870d7d" gracePeriod=10 Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.263201 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.626779 5050 generic.go:334] "Generic (PLEG): container finished" podID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerID="c1bda78eaf69c98db29e69077ed67d9a60cab756becd6bd3de755df17c870d7d" exitCode=0 Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.626986 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" event={"ID":"2d4e46ab-29a5-409d-977b-3c92880d4f62","Type":"ContainerDied","Data":"c1bda78eaf69c98db29e69077ed67d9a60cab756becd6bd3de755df17c870d7d"} Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.627045 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" event={"ID":"2d4e46ab-29a5-409d-977b-3c92880d4f62","Type":"ContainerDied","Data":"d11f9c67dcf290f4c5409c44c7f19f1d56b5269ac829736bc1671b1d4a35d523"} Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.627063 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d11f9c67dcf290f4c5409c44c7f19f1d56b5269ac829736bc1671b1d4a35d523" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.655493 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.657903 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.806232 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-config\") pod \"2d4e46ab-29a5-409d-977b-3c92880d4f62\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.806274 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-nb\") pod \"2d4e46ab-29a5-409d-977b-3c92880d4f62\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.806369 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-dns-svc\") pod \"2d4e46ab-29a5-409d-977b-3c92880d4f62\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.806500 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zssdp\" (UniqueName: \"kubernetes.io/projected/2d4e46ab-29a5-409d-977b-3c92880d4f62-kube-api-access-zssdp\") pod \"2d4e46ab-29a5-409d-977b-3c92880d4f62\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.806565 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-sb\") pod \"2d4e46ab-29a5-409d-977b-3c92880d4f62\" (UID: \"2d4e46ab-29a5-409d-977b-3c92880d4f62\") " Jan 31 05:41:51 crc 
kubenswrapper[5050]: I0131 05:41:51.819146 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4e46ab-29a5-409d-977b-3c92880d4f62-kube-api-access-zssdp" (OuterVolumeSpecName: "kube-api-access-zssdp") pod "2d4e46ab-29a5-409d-977b-3c92880d4f62" (UID: "2d4e46ab-29a5-409d-977b-3c92880d4f62"). InnerVolumeSpecName "kube-api-access-zssdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.850389 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-config" (OuterVolumeSpecName: "config") pod "2d4e46ab-29a5-409d-977b-3c92880d4f62" (UID: "2d4e46ab-29a5-409d-977b-3c92880d4f62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.862029 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d4e46ab-29a5-409d-977b-3c92880d4f62" (UID: "2d4e46ab-29a5-409d-977b-3c92880d4f62"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.876791 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d4e46ab-29a5-409d-977b-3c92880d4f62" (UID: "2d4e46ab-29a5-409d-977b-3c92880d4f62"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.881348 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d4e46ab-29a5-409d-977b-3c92880d4f62" (UID: "2d4e46ab-29a5-409d-977b-3c92880d4f62"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.912679 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.912722 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.912736 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.912747 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zssdp\" (UniqueName: \"kubernetes.io/projected/2d4e46ab-29a5-409d-977b-3c92880d4f62-kube-api-access-zssdp\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:51 crc kubenswrapper[5050]: I0131 05:41:51.912763 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e46ab-29a5-409d-977b-3c92880d4f62-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:52 crc kubenswrapper[5050]: I0131 05:41:52.637396 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-dml6c" Jan 31 05:41:52 crc kubenswrapper[5050]: I0131 05:41:52.637566 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="cinder-scheduler" containerID="cri-o://8d3d44c75e9765a0741de4f78d82b41249cc57552f6951f9543ffdd9ceef8059" gracePeriod=30 Jan 31 05:41:52 crc kubenswrapper[5050]: I0131 05:41:52.637693 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="probe" containerID="cri-o://eb4e689eb68a955994fa963cf3f7945be7325e28479a2c8e50ee1dab06e36bfa" gracePeriod=30 Jan 31 05:41:52 crc kubenswrapper[5050]: I0131 05:41:52.676786 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-dml6c"] Jan 31 05:41:52 crc kubenswrapper[5050]: I0131 05:41:52.688702 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-dml6c"] Jan 31 05:41:53 crc kubenswrapper[5050]: I0131 05:41:53.651134 5050 generic.go:334] "Generic (PLEG): container finished" podID="c9f82d6b-5e75-48cd-b642-55d3fa91f520" containerID="85daf693e8813572df891be894023528332c01b59f46c8ac34c40beb6704cb7e" exitCode=0 Jan 31 05:41:53 crc kubenswrapper[5050]: I0131 05:41:53.651223 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4kpps" event={"ID":"c9f82d6b-5e75-48cd-b642-55d3fa91f520","Type":"ContainerDied","Data":"85daf693e8813572df891be894023528332c01b59f46c8ac34c40beb6704cb7e"} Jan 31 05:41:53 crc kubenswrapper[5050]: I0131 05:41:53.654215 5050 generic.go:334] "Generic (PLEG): container finished" podID="a4951693-452d-4484-88cf-692f800e1160" containerID="eb4e689eb68a955994fa963cf3f7945be7325e28479a2c8e50ee1dab06e36bfa" exitCode=0 Jan 31 05:41:53 crc kubenswrapper[5050]: I0131 05:41:53.654264 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a4951693-452d-4484-88cf-692f800e1160","Type":"ContainerDied","Data":"eb4e689eb68a955994fa963cf3f7945be7325e28479a2c8e50ee1dab06e36bfa"} Jan 31 05:41:53 crc kubenswrapper[5050]: I0131 05:41:53.746506 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4e46ab-29a5-409d-977b-3c92880d4f62" path="/var/lib/kubelet/pods/2d4e46ab-29a5-409d-977b-3c92880d4f62/volumes" Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.127719 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.128227 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-central-agent" containerID="cri-o://4193f76852747704db40eea62cf50bc6746bcb598b82a8df745e617fa1401e5d" gracePeriod=30 Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.128345 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-notification-agent" containerID="cri-o://efc374ebd56f42f37487a548173a66e3a13b6e3c1477de2e62425a33464c187f" gracePeriod=30 Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.128379 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="sg-core" containerID="cri-o://e4074462d594465b50ae06f53cefe020315ab01e5e7ed5ecef494da2691f1347" gracePeriod=30 Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.128576 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="proxy-httpd" containerID="cri-o://d14d374837d3d4622c06f3d64c13e8c54f61e743aeaee782c8634d4f2fed8865" 
gracePeriod=30 Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.682140 5050 generic.go:334] "Generic (PLEG): container finished" podID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerID="d14d374837d3d4622c06f3d64c13e8c54f61e743aeaee782c8634d4f2fed8865" exitCode=0 Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.682178 5050 generic.go:334] "Generic (PLEG): container finished" podID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerID="e4074462d594465b50ae06f53cefe020315ab01e5e7ed5ecef494da2691f1347" exitCode=2 Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.682189 5050 generic.go:334] "Generic (PLEG): container finished" podID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerID="4193f76852747704db40eea62cf50bc6746bcb598b82a8df745e617fa1401e5d" exitCode=0 Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.682177 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerDied","Data":"d14d374837d3d4622c06f3d64c13e8c54f61e743aeaee782c8634d4f2fed8865"} Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.682420 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerDied","Data":"e4074462d594465b50ae06f53cefe020315ab01e5e7ed5ecef494da2691f1347"} Jan 31 05:41:54 crc kubenswrapper[5050]: I0131 05:41:54.682437 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerDied","Data":"4193f76852747704db40eea62cf50bc6746bcb598b82a8df745e617fa1401e5d"} Jan 31 05:41:55 crc kubenswrapper[5050]: I0131 05:41:55.693245 5050 generic.go:334] "Generic (PLEG): container finished" podID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerID="efc374ebd56f42f37487a548173a66e3a13b6e3c1477de2e62425a33464c187f" exitCode=0 Jan 31 05:41:55 crc kubenswrapper[5050]: I0131 05:41:55.693589 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerDied","Data":"efc374ebd56f42f37487a548173a66e3a13b6e3c1477de2e62425a33464c187f"} Jan 31 05:41:57 crc kubenswrapper[5050]: I0131 05:41:57.749866 5050 generic.go:334] "Generic (PLEG): container finished" podID="a4951693-452d-4484-88cf-692f800e1160" containerID="8d3d44c75e9765a0741de4f78d82b41249cc57552f6951f9543ffdd9ceef8059" exitCode=0 Jan 31 05:41:57 crc kubenswrapper[5050]: I0131 05:41:57.749994 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a4951693-452d-4484-88cf-692f800e1160","Type":"ContainerDied","Data":"8d3d44c75e9765a0741de4f78d82b41249cc57552f6951f9543ffdd9ceef8059"} Jan 31 05:41:57 crc kubenswrapper[5050]: I0131 05:41:57.768326 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:57 crc kubenswrapper[5050]: I0131 05:41:57.792524 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7995457cdd-4p7kh" Jan 31 05:41:57 crc kubenswrapper[5050]: I0131 05:41:57.838652 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-648cf7894d-hsztl"] Jan 31 05:41:57 crc kubenswrapper[5050]: I0131 05:41:57.838876 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-648cf7894d-hsztl" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api-log" containerID="cri-o://d0380ac7192163b25a8431d9a7dccddfdeaf903fe2ba8746c1c92276876c0d63" gracePeriod=30 Jan 31 05:41:57 crc kubenswrapper[5050]: I0131 05:41:57.839436 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-648cf7894d-hsztl" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api" 
containerID="cri-o://2a2a0fa4a8bf6bf0b27ec912f5e73042d6714655f74665522e07fe6a702b299e" gracePeriod=30 Jan 31 05:41:58 crc kubenswrapper[5050]: I0131 05:41:58.760404 5050 generic.go:334] "Generic (PLEG): container finished" podID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerID="d0380ac7192163b25a8431d9a7dccddfdeaf903fe2ba8746c1c92276876c0d63" exitCode=143 Jan 31 05:41:58 crc kubenswrapper[5050]: I0131 05:41:58.760585 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cf7894d-hsztl" event={"ID":"2fce74dc-b894-413b-85d2-0b16ab6808e1","Type":"ContainerDied","Data":"d0380ac7192163b25a8431d9a7dccddfdeaf903fe2ba8746c1c92276876c0d63"} Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.016522 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4kpps" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.141743 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.184632 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/c9f82d6b-5e75-48cd-b642-55d3fa91f520-kube-api-access-fv5w6\") pod \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.184944 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-combined-ca-bundle\") pod \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.184992 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-config\") pod 
\"c9f82d6b-5e75-48cd-b642-55d3fa91f520\" (UID: \"c9f82d6b-5e75-48cd-b642-55d3fa91f520\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.195603 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f82d6b-5e75-48cd-b642-55d3fa91f520-kube-api-access-fv5w6" (OuterVolumeSpecName: "kube-api-access-fv5w6") pod "c9f82d6b-5e75-48cd-b642-55d3fa91f520" (UID: "c9f82d6b-5e75-48cd-b642-55d3fa91f520"). InnerVolumeSpecName "kube-api-access-fv5w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.254334 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-config" (OuterVolumeSpecName: "config") pod "c9f82d6b-5e75-48cd-b642-55d3fa91f520" (UID: "c9f82d6b-5e75-48cd-b642-55d3fa91f520"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.254647 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9f82d6b-5e75-48cd-b642-55d3fa91f520" (UID: "c9f82d6b-5e75-48cd-b642-55d3fa91f520"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.287159 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv5w6\" (UniqueName: \"kubernetes.io/projected/c9f82d6b-5e75-48cd-b642-55d3fa91f520-kube-api-access-fv5w6\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.287374 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.287434 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9f82d6b-5e75-48cd-b642-55d3fa91f520-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.320512 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.383302 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490170 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data\") pod \"a4951693-452d-4484-88cf-692f800e1160\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490254 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data-custom\") pod \"a4951693-452d-4484-88cf-692f800e1160\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490293 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-config-data\") pod \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490323 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-sg-core-conf-yaml\") pod \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490345 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6wv9\" (UniqueName: \"kubernetes.io/projected/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-kube-api-access-x6wv9\") pod \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490382 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-scripts\") pod \"a4951693-452d-4484-88cf-692f800e1160\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490467 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-scripts\") pod \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490485 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftd6j\" (UniqueName: \"kubernetes.io/projected/a4951693-452d-4484-88cf-692f800e1160-kube-api-access-ftd6j\") pod \"a4951693-452d-4484-88cf-692f800e1160\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490521 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-log-httpd\") pod \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490540 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-combined-ca-bundle\") pod \"a4951693-452d-4484-88cf-692f800e1160\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490555 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4951693-452d-4484-88cf-692f800e1160-etc-machine-id\") pod \"a4951693-452d-4484-88cf-692f800e1160\" (UID: \"a4951693-452d-4484-88cf-692f800e1160\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490581 5050 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-run-httpd\") pod \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.490595 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-combined-ca-bundle\") pod \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\" (UID: \"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0\") " Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.492190 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" (UID: "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.492185 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4951693-452d-4484-88cf-692f800e1160-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a4951693-452d-4484-88cf-692f800e1160" (UID: "a4951693-452d-4484-88cf-692f800e1160"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.492603 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" (UID: "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.495166 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a4951693-452d-4484-88cf-692f800e1160" (UID: "a4951693-452d-4484-88cf-692f800e1160"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.496006 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-scripts" (OuterVolumeSpecName: "scripts") pod "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" (UID: "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.496189 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-kube-api-access-x6wv9" (OuterVolumeSpecName: "kube-api-access-x6wv9") pod "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" (UID: "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0"). InnerVolumeSpecName "kube-api-access-x6wv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.496759 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-scripts" (OuterVolumeSpecName: "scripts") pod "a4951693-452d-4484-88cf-692f800e1160" (UID: "a4951693-452d-4484-88cf-692f800e1160"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.497534 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4951693-452d-4484-88cf-692f800e1160-kube-api-access-ftd6j" (OuterVolumeSpecName: "kube-api-access-ftd6j") pod "a4951693-452d-4484-88cf-692f800e1160" (UID: "a4951693-452d-4484-88cf-692f800e1160"). InnerVolumeSpecName "kube-api-access-ftd6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.531180 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" (UID: "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.544115 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4951693-452d-4484-88cf-692f800e1160" (UID: "a4951693-452d-4484-88cf-692f800e1160"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.569020 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" (UID: "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592721 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592759 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftd6j\" (UniqueName: \"kubernetes.io/projected/a4951693-452d-4484-88cf-692f800e1160-kube-api-access-ftd6j\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592776 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592790 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592802 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4951693-452d-4484-88cf-692f800e1160-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592814 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592826 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592838 5050 
reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592849 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592859 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6wv9\" (UniqueName: \"kubernetes.io/projected/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-kube-api-access-x6wv9\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.592871 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.625696 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-config-data" (OuterVolumeSpecName: "config-data") pod "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" (UID: "6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.649079 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data" (OuterVolumeSpecName: "config-data") pod "a4951693-452d-4484-88cf-692f800e1160" (UID: "a4951693-452d-4484-88cf-692f800e1160"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.694292 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4951693-452d-4484-88cf-692f800e1160-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.694332 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744346 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zxbwv"] Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744703 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="probe" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744719 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="probe" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744737 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="sg-core" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744744 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="sg-core" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744759 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerName="init" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744764 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerName="init" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744780 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-central-agent" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744787 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-central-agent" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744801 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f82d6b-5e75-48cd-b642-55d3fa91f520" containerName="neutron-db-sync" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744806 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f82d6b-5e75-48cd-b642-55d3fa91f520" containerName="neutron-db-sync" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744815 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="cinder-scheduler" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744820 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="cinder-scheduler" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744826 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerName="dnsmasq-dns" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744832 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerName="dnsmasq-dns" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744850 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-notification-agent" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744857 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-notification-agent" Jan 31 05:42:00 crc kubenswrapper[5050]: E0131 05:42:00.744866 5050 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="proxy-httpd" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.744871 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="proxy-httpd" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745014 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="sg-core" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745023 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-central-agent" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745034 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="ceilometer-notification-agent" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745043 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="probe" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745054 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4e46ab-29a5-409d-977b-3c92880d4f62" containerName="dnsmasq-dns" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745065 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f82d6b-5e75-48cd-b642-55d3fa91f520" containerName="neutron-db-sync" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745073 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" containerName="proxy-httpd" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745082 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4951693-452d-4484-88cf-692f800e1160" containerName="cinder-scheduler" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.745604 5050 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.754607 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zxbwv"] Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.777242 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a4951693-452d-4484-88cf-692f800e1160","Type":"ContainerDied","Data":"f7b0fac36173054512399d213070981224442a2f4d92222f53ec697b747283b2"} Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.777296 5050 scope.go:117] "RemoveContainer" containerID="eb4e689eb68a955994fa963cf3f7945be7325e28479a2c8e50ee1dab06e36bfa" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.777447 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.784106 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0","Type":"ContainerDied","Data":"c9881476728faa88a9db4b40ea1f7b0234a576938052be70ad82d3848c2c9d9e"} Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.784198 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.793794 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"58791cf1-4858-4849-9ada-2a41e6df553e","Type":"ContainerStarted","Data":"1c998b328019560536ec4f21b51c46cc58c82ddfe765b0bbd3efc9b6a0d6376b"} Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.797039 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4kpps" event={"ID":"c9f82d6b-5e75-48cd-b642-55d3fa91f520","Type":"ContainerDied","Data":"c14684211e27c0eaedce9593f3efe371496fb771ab6e9117afa2873b3572e492"} Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.797073 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c14684211e27c0eaedce9593f3efe371496fb771ab6e9117afa2873b3572e492" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.797131 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4kpps" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.807434 5050 scope.go:117] "RemoveContainer" containerID="8d3d44c75e9765a0741de4f78d82b41249cc57552f6951f9543ffdd9ceef8059" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.814018 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.831089 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.841621 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.843969 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.867585 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.868974 5050 scope.go:117] "RemoveContainer" containerID="d14d374837d3d4622c06f3d64c13e8c54f61e743aeaee782c8634d4f2fed8865" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.879196 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.328717541 podStartE2EDuration="17.879175392s" podCreationTimestamp="2026-01-31 05:41:43 +0000 UTC" firstStartedPulling="2026-01-31 05:41:44.485614112 +0000 UTC m=+1229.534775708" lastFinishedPulling="2026-01-31 05:42:00.036071963 +0000 UTC m=+1245.085233559" observedRunningTime="2026-01-31 05:42:00.867855534 +0000 UTC m=+1245.917017130" watchObservedRunningTime="2026-01-31 05:42:00.879175392 +0000 UTC m=+1245.928336988" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.899996 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrp4f\" (UniqueName: \"kubernetes.io/projected/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-kube-api-access-xrp4f\") pod \"nova-api-db-create-zxbwv\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.900061 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-operator-scripts\") pod \"nova-api-db-create-zxbwv\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.908071 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] 
Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.935611 5050 scope.go:117] "RemoveContainer" containerID="e4074462d594465b50ae06f53cefe020315ab01e5e7ed5ecef494da2691f1347" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.959404 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2kk7d"] Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.960650 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.977380 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2kk7d"] Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.984011 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-461c-account-create-update-6n75v"] Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.984574 5050 scope.go:117] "RemoveContainer" containerID="efc374ebd56f42f37487a548173a66e3a13b6e3c1477de2e62425a33464c187f" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.985474 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.988872 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 05:42:00 crc kubenswrapper[5050]: I0131 05:42:00.994032 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-461c-account-create-update-6n75v"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.005733 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.005802 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.005852 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qwzm\" (UniqueName: \"kubernetes.io/projected/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-kube-api-access-8qwzm\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.005888 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 
05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.005902 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.005929 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrp4f\" (UniqueName: \"kubernetes.io/projected/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-kube-api-access-xrp4f\") pod \"nova-api-db-create-zxbwv\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.005982 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-operator-scripts\") pod \"nova-api-db-create-zxbwv\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.006019 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.007029 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-operator-scripts\") pod \"nova-api-db-create-zxbwv\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.019413 5050 scope.go:117] 
"RemoveContainer" containerID="4193f76852747704db40eea62cf50bc6746bcb598b82a8df745e617fa1401e5d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.036499 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrp4f\" (UniqueName: \"kubernetes.io/projected/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-kube-api-access-xrp4f\") pod \"nova-api-db-create-zxbwv\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.036564 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.047419 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.059243 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.059783 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.061211 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.063968 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.064060 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.070207 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.078895 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-648cf7894d-hsztl" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:36122->10.217.0.147:9311: read: connection reset by peer" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.079225 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-648cf7894d-hsztl" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:36130->10.217.0.147:9311: read: connection reset by peer" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109207 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4stf\" (UniqueName: \"kubernetes.io/projected/68226938-30ee-43b0-a15b-4ae65840c5b9-kube-api-access-p4stf\") pod \"nova-api-461c-account-create-update-6n75v\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109277 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68226938-30ee-43b0-a15b-4ae65840c5b9-operator-scripts\") pod \"nova-api-461c-account-create-update-6n75v\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109334 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2s2h\" (UniqueName: \"kubernetes.io/projected/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-kube-api-access-g2s2h\") pod \"nova-cell0-db-create-2kk7d\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109369 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qwzm\" (UniqueName: \"kubernetes.io/projected/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-kube-api-access-8qwzm\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109374 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109386 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-operator-scripts\") pod \"nova-cell0-db-create-2kk7d\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109461 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109503 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.109541 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.119506 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " 
pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.119611 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.119807 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.120911 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.161509 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qwzm\" (UniqueName: \"kubernetes.io/projected/7b9ed42c-b571-4eec-b45d-802eaa8cf8b7-kube-api-access-8qwzm\") pod \"cinder-scheduler-0\" (UID: \"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7\") " pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.172143 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gnrqd"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.174143 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.182004 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-70b5-account-create-update-kpzzv"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.183051 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.208716 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gnrqd"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.213759 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.214901 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4stf\" (UniqueName: \"kubernetes.io/projected/68226938-30ee-43b0-a15b-4ae65840c5b9-kube-api-access-p4stf\") pod \"nova-api-461c-account-create-update-6n75v\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.214979 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215004 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-log-httpd\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 
05:42:01.215028 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68226938-30ee-43b0-a15b-4ae65840c5b9-operator-scripts\") pod \"nova-api-461c-account-create-update-6n75v\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-config-data\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215073 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2s2h\" (UniqueName: \"kubernetes.io/projected/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-kube-api-access-g2s2h\") pod \"nova-cell0-db-create-2kk7d\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215093 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215112 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-operator-scripts\") pod \"nova-cell0-db-create-2kk7d\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215170 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-run-httpd\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215185 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-scripts\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.215213 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkvm\" (UniqueName: \"kubernetes.io/projected/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-kube-api-access-fkkvm\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.216086 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68226938-30ee-43b0-a15b-4ae65840c5b9-operator-scripts\") pod \"nova-api-461c-account-create-update-6n75v\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.216666 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-operator-scripts\") pod \"nova-cell0-db-create-2kk7d\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.224218 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-70b5-account-create-update-kpzzv"] Jan 31 
05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.236118 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.238572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2s2h\" (UniqueName: \"kubernetes.io/projected/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-kube-api-access-g2s2h\") pod \"nova-cell0-db-create-2kk7d\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.279750 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.281373 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4stf\" (UniqueName: \"kubernetes.io/projected/68226938-30ee-43b0-a15b-4ae65840c5b9-kube-api-access-p4stf\") pod \"nova-api-461c-account-create-update-6n75v\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319167 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-run-httpd\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319210 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-scripts\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319242 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-operator-scripts\") pod \"nova-cell1-db-create-gnrqd\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319274 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkkvm\" (UniqueName: \"kubernetes.io/projected/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-kube-api-access-fkkvm\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319296 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f173-account-create-update-47tzz"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319357 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319383 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-log-httpd\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319415 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-config-data\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319440 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88b9496-edac-4fbd-a33b-287b9289d20e-operator-scripts\") pod \"nova-cell0-70b5-account-create-update-kpzzv\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319501 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjmn\" (UniqueName: \"kubernetes.io/projected/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-kube-api-access-nxjmn\") pod \"nova-cell1-db-create-gnrqd\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.319536 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kw52\" (UniqueName: \"kubernetes.io/projected/f88b9496-edac-4fbd-a33b-287b9289d20e-kube-api-access-8kw52\") pod \"nova-cell0-70b5-account-create-update-kpzzv\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.320011 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-run-httpd\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.320315 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.322317 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-log-httpd\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.336850 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.336910 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f173-account-create-update-47tzz"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.337186 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.337441 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.338868 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-scripts\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.339133 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-config-data\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.339384 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.345373 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkvm\" (UniqueName: \"kubernetes.io/projected/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-kube-api-access-fkkvm\") pod \"ceilometer-0\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.356780 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-mqhmf"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.415717 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.431328 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.509193 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-mqhmf"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.524487 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kw52\" (UniqueName: \"kubernetes.io/projected/f88b9496-edac-4fbd-a33b-287b9289d20e-kube-api-access-8kw52\") pod \"nova-cell0-70b5-account-create-update-kpzzv\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.524578 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.524639 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.524675 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-operator-scripts\") pod \"nova-cell1-db-create-gnrqd\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.524767 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-config\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.524886 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-operator-scripts\") pod \"nova-cell1-f173-account-create-update-47tzz\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.524964 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88b9496-edac-4fbd-a33b-287b9289d20e-operator-scripts\") pod \"nova-cell0-70b5-account-create-update-kpzzv\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.525037 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.525074 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxjmn\" (UniqueName: \"kubernetes.io/projected/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-kube-api-access-nxjmn\") pod \"nova-cell1-db-create-gnrqd\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: 
I0131 05:42:01.525102 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmb2c\" (UniqueName: \"kubernetes.io/projected/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-kube-api-access-wmb2c\") pod \"nova-cell1-f173-account-create-update-47tzz\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.525138 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjcs\" (UniqueName: \"kubernetes.io/projected/ac560e57-d991-4e2f-826b-136d7c6dc075-kube-api-access-fwjcs\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.526629 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-operator-scripts\") pod \"nova-cell1-db-create-gnrqd\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.538750 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88b9496-edac-4fbd-a33b-287b9289d20e-operator-scripts\") pod \"nova-cell0-70b5-account-create-update-kpzzv\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.592935 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kw52\" (UniqueName: \"kubernetes.io/projected/f88b9496-edac-4fbd-a33b-287b9289d20e-kube-api-access-8kw52\") pod \"nova-cell0-70b5-account-create-update-kpzzv\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " 
pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.614538 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxjmn\" (UniqueName: \"kubernetes.io/projected/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-kube-api-access-nxjmn\") pod \"nova-cell1-db-create-gnrqd\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.631058 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-operator-scripts\") pod \"nova-cell1-f173-account-create-update-47tzz\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.631139 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.631164 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmb2c\" (UniqueName: \"kubernetes.io/projected/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-kube-api-access-wmb2c\") pod \"nova-cell1-f173-account-create-update-47tzz\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.631181 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwjcs\" (UniqueName: \"kubernetes.io/projected/ac560e57-d991-4e2f-826b-136d7c6dc075-kube-api-access-fwjcs\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: 
\"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.631218 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.631242 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.631279 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-config\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.632018 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-config\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.632491 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-operator-scripts\") pod \"nova-cell1-f173-account-create-update-47tzz\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " 
pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.632978 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.633852 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.642338 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.642676 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cc6759b56-tdkxj"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.644098 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.647491 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.659795 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.665591 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.665938 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.666148 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ldx7x" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.666677 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.675312 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwjcs\" (UniqueName: \"kubernetes.io/projected/ac560e57-d991-4e2f-826b-136d7c6dc075-kube-api-access-fwjcs\") pod \"dnsmasq-dns-6d97fcdd8f-mqhmf\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.675523 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmb2c\" (UniqueName: \"kubernetes.io/projected/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-kube-api-access-wmb2c\") pod \"nova-cell1-f173-account-create-update-47tzz\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.698145 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cc6759b56-tdkxj"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.732329 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-config\") pod 
\"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.732405 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-combined-ca-bundle\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.732435 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-ovndb-tls-certs\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.732455 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-httpd-config\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.732478 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmkv\" (UniqueName: \"kubernetes.io/projected/1843d770-24a2-4dbd-bf4d-236aab2a27ca-kube-api-access-5rmkv\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.768926 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.777761 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0" path="/var/lib/kubelet/pods/6a1334ea-e70a-4cd5-ae82-7013bf3a8ee0/volumes" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.778645 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4951693-452d-4484-88cf-692f800e1160" path="/var/lib/kubelet/pods/a4951693-452d-4484-88cf-692f800e1160/volumes" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.833747 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-config\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.833828 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-combined-ca-bundle\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.833857 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-ovndb-tls-certs\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.833873 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-httpd-config\") pod \"neutron-5cc6759b56-tdkxj\" 
(UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.833893 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmkv\" (UniqueName: \"kubernetes.io/projected/1843d770-24a2-4dbd-bf4d-236aab2a27ca-kube-api-access-5rmkv\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.841099 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-config\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.852043 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zxbwv"] Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.854393 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.862435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmkv\" (UniqueName: \"kubernetes.io/projected/1843d770-24a2-4dbd-bf4d-236aab2a27ca-kube-api-access-5rmkv\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.874403 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-combined-ca-bundle\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.891393 5050 generic.go:334] "Generic (PLEG): container finished" podID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerID="2a2a0fa4a8bf6bf0b27ec912f5e73042d6714655f74665522e07fe6a702b299e" exitCode=0 Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.891477 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cf7894d-hsztl" event={"ID":"2fce74dc-b894-413b-85d2-0b16ab6808e1","Type":"ContainerDied","Data":"2a2a0fa4a8bf6bf0b27ec912f5e73042d6714655f74665522e07fe6a702b299e"} Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.901887 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-httpd-config\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: I0131 05:42:01.909889 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-ovndb-tls-certs\") pod \"neutron-5cc6759b56-tdkxj\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:01 crc kubenswrapper[5050]: W0131 05:42:01.989133 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bde430b_fe13_43d9_b5e8_44c9c4953ad7.slice/crio-747f6a9ddb889268665805678e3b7c9384b4a849f8cf361464a9fdf73d3a496b WatchSource:0}: Error finding container 747f6a9ddb889268665805678e3b7c9384b4a849f8cf361464a9fdf73d3a496b: Status 404 returned error can't find the container with id 747f6a9ddb889268665805678e3b7c9384b4a849f8cf361464a9fdf73d3a496b Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.018746 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.072601 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.163041 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data\") pod \"2fce74dc-b894-413b-85d2-0b16ab6808e1\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.163103 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-combined-ca-bundle\") pod \"2fce74dc-b894-413b-85d2-0b16ab6808e1\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.163125 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fce74dc-b894-413b-85d2-0b16ab6808e1-logs\") pod \"2fce74dc-b894-413b-85d2-0b16ab6808e1\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.163174 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data-custom\") pod \"2fce74dc-b894-413b-85d2-0b16ab6808e1\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.163209 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl2z7\" (UniqueName: \"kubernetes.io/projected/2fce74dc-b894-413b-85d2-0b16ab6808e1-kube-api-access-kl2z7\") pod \"2fce74dc-b894-413b-85d2-0b16ab6808e1\" (UID: \"2fce74dc-b894-413b-85d2-0b16ab6808e1\") " Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.165810 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2fce74dc-b894-413b-85d2-0b16ab6808e1-logs" (OuterVolumeSpecName: "logs") pod "2fce74dc-b894-413b-85d2-0b16ab6808e1" (UID: "2fce74dc-b894-413b-85d2-0b16ab6808e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.190510 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2fce74dc-b894-413b-85d2-0b16ab6808e1" (UID: "2fce74dc-b894-413b-85d2-0b16ab6808e1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.210233 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fce74dc-b894-413b-85d2-0b16ab6808e1-kube-api-access-kl2z7" (OuterVolumeSpecName: "kube-api-access-kl2z7") pod "2fce74dc-b894-413b-85d2-0b16ab6808e1" (UID: "2fce74dc-b894-413b-85d2-0b16ab6808e1"). InnerVolumeSpecName "kube-api-access-kl2z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.232765 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fce74dc-b894-413b-85d2-0b16ab6808e1" (UID: "2fce74dc-b894-413b-85d2-0b16ab6808e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.256016 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data" (OuterVolumeSpecName: "config-data") pod "2fce74dc-b894-413b-85d2-0b16ab6808e1" (UID: "2fce74dc-b894-413b-85d2-0b16ab6808e1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.264922 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.264947 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.264971 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fce74dc-b894-413b-85d2-0b16ab6808e1-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.264980 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fce74dc-b894-413b-85d2-0b16ab6808e1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.264988 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl2z7\" (UniqueName: \"kubernetes.io/projected/2fce74dc-b894-413b-85d2-0b16ab6808e1-kube-api-access-kl2z7\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:02 crc kubenswrapper[5050]: W0131 05:42:02.356707 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b998de1_8a4c_48c3_a3d5_4bf1309a8394.slice/crio-3891a6849d8c003e144176385ba824e72e07c17ecb40fe9529ccce6d5b4230a5 WatchSource:0}: Error finding container 3891a6849d8c003e144176385ba824e72e07c17ecb40fe9529ccce6d5b4230a5: Status 404 returned error can't find the container with id 3891a6849d8c003e144176385ba824e72e07c17ecb40fe9529ccce6d5b4230a5 Jan 31 05:42:02 crc 
kubenswrapper[5050]: I0131 05:42:02.357994 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2kk7d"] Jan 31 05:42:02 crc kubenswrapper[5050]: W0131 05:42:02.385637 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b9ed42c_b571_4eec_b45d_802eaa8cf8b7.slice/crio-ad7132b8ffd1adf2d8eda21fab9e8d77a9e4b4c76c13a96d4373c030313773a6 WatchSource:0}: Error finding container ad7132b8ffd1adf2d8eda21fab9e8d77a9e4b4c76c13a96d4373c030313773a6: Status 404 returned error can't find the container with id ad7132b8ffd1adf2d8eda21fab9e8d77a9e4b4c76c13a96d4373c030313773a6 Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.389193 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.455800 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.498422 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-461c-account-create-update-6n75v"] Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.695369 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.914234 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerStarted","Data":"fc08701d77dd7007c5af9e9f670297315bd58aa7c95b81dc5b8e253903ee9ff3"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.918889 5050 generic.go:334] "Generic (PLEG): container finished" podID="6bde430b-fe13-43d9-b5e8-44c9c4953ad7" containerID="2dccedd016e1a35024d34d75407373433dc3eca4cdbd0f9ace251083167c1ce7" exitCode=0 Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.919267 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-zxbwv" event={"ID":"6bde430b-fe13-43d9-b5e8-44c9c4953ad7","Type":"ContainerDied","Data":"2dccedd016e1a35024d34d75407373433dc3eca4cdbd0f9ace251083167c1ce7"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.919293 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxbwv" event={"ID":"6bde430b-fe13-43d9-b5e8-44c9c4953ad7","Type":"ContainerStarted","Data":"747f6a9ddb889268665805678e3b7c9384b4a849f8cf361464a9fdf73d3a496b"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.943487 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cf7894d-hsztl" event={"ID":"2fce74dc-b894-413b-85d2-0b16ab6808e1","Type":"ContainerDied","Data":"53fd08ff879621937bc6e6510bab1cbcf10841d208defd983698aa5aeca2f2e2"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.943537 5050 scope.go:117] "RemoveContainer" containerID="2a2a0fa4a8bf6bf0b27ec912f5e73042d6714655f74665522e07fe6a702b299e" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.943643 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-648cf7894d-hsztl" Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.956973 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-461c-account-create-update-6n75v" event={"ID":"68226938-30ee-43b0-a15b-4ae65840c5b9","Type":"ContainerStarted","Data":"3fc6e85ff9d452f7dfc90bdd9bcc093fd59d91bf378415ebd70ca8b91e1cae5c"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.957008 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-461c-account-create-update-6n75v" event={"ID":"68226938-30ee-43b0-a15b-4ae65840c5b9","Type":"ContainerStarted","Data":"2e3d4e0559a8b5401efa41dcdacaa01e313f6c5b8e1f38f4cfa26c41978905ce"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.966854 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7","Type":"ContainerStarted","Data":"ad7132b8ffd1adf2d8eda21fab9e8d77a9e4b4c76c13a96d4373c030313773a6"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.982996 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gnrqd"] Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.984282 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2kk7d" event={"ID":"4b998de1-8a4c-48c3-a3d5-4bf1309a8394","Type":"ContainerStarted","Data":"c61f0f85a2885bb6f42a3e2da51334a056a26e80fea47eba6e73d0bd19d0ba27"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.984318 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2kk7d" event={"ID":"4b998de1-8a4c-48c3-a3d5-4bf1309a8394","Type":"ContainerStarted","Data":"3891a6849d8c003e144176385ba824e72e07c17ecb40fe9529ccce6d5b4230a5"} Jan 31 05:42:02 crc kubenswrapper[5050]: I0131 05:42:02.993019 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-70b5-account-create-update-kpzzv"] 
Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.001620 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-461c-account-create-update-6n75v" podStartSLOduration=3.00159825 podStartE2EDuration="3.00159825s" podCreationTimestamp="2026-01-31 05:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:42:02.972125488 +0000 UTC m=+1248.021287084" watchObservedRunningTime="2026-01-31 05:42:03.00159825 +0000 UTC m=+1248.050759836" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.015608 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f173-account-create-update-47tzz"] Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.018879 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-2kk7d" podStartSLOduration=3.01886067 podStartE2EDuration="3.01886067s" podCreationTimestamp="2026-01-31 05:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:42:03.002035541 +0000 UTC m=+1248.051197137" watchObservedRunningTime="2026-01-31 05:42:03.01886067 +0000 UTC m=+1248.068022266" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.041077 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-mqhmf"] Jan 31 05:42:03 crc kubenswrapper[5050]: W0131 05:42:03.055430 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf88b9496_edac_4fbd_a33b_287b9289d20e.slice/crio-46ede99d3d0508236bf800aed445c9ad9d59cccfea1911519265b177e49f9dd1 WatchSource:0}: Error finding container 46ede99d3d0508236bf800aed445c9ad9d59cccfea1911519265b177e49f9dd1: Status 404 returned error can't find the container with id 
46ede99d3d0508236bf800aed445c9ad9d59cccfea1911519265b177e49f9dd1 Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.073786 5050 scope.go:117] "RemoveContainer" containerID="d0380ac7192163b25a8431d9a7dccddfdeaf903fe2ba8746c1c92276876c0d63" Jan 31 05:42:03 crc kubenswrapper[5050]: W0131 05:42:03.094046 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f1949d9_8ed7_4d51_91d0_82b8e77b6a4b.slice/crio-78b4066f10cdd51eb655d4936a3845dc4d20d954aca26d78d4687179fec926e6 WatchSource:0}: Error finding container 78b4066f10cdd51eb655d4936a3845dc4d20d954aca26d78d4687179fec926e6: Status 404 returned error can't find the container with id 78b4066f10cdd51eb655d4936a3845dc4d20d954aca26d78d4687179fec926e6 Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.166117 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cc6759b56-tdkxj"] Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.232556 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-648cf7894d-hsztl"] Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.240468 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-648cf7894d-hsztl"] Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.757260 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" path="/var/lib/kubelet/pods/2fce74dc-b894-413b-85d2-0b16ab6808e1/volumes" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.956899 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86dbc7dc8f-2zfkt"] Jan 31 05:42:03 crc kubenswrapper[5050]: E0131 05:42:03.957322 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api-log" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.957339 5050 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api-log" Jan 31 05:42:03 crc kubenswrapper[5050]: E0131 05:42:03.957355 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.957361 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.957514 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.957541 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce74dc-b894-413b-85d2-0b16ab6808e1" containerName="barbican-api-log" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.958410 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.961516 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.961690 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 05:42:03 crc kubenswrapper[5050]: I0131 05:42:03.984689 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86dbc7dc8f-2zfkt"] Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.005341 5050 generic.go:334] "Generic (PLEG): container finished" podID="68226938-30ee-43b0-a15b-4ae65840c5b9" containerID="3fc6e85ff9d452f7dfc90bdd9bcc093fd59d91bf378415ebd70ca8b91e1cae5c" exitCode=0 Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.005399 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-461c-account-create-update-6n75v" 
event={"ID":"68226938-30ee-43b0-a15b-4ae65840c5b9","Type":"ContainerDied","Data":"3fc6e85ff9d452f7dfc90bdd9bcc093fd59d91bf378415ebd70ca8b91e1cae5c"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.008726 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-internal-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.008786 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-combined-ca-bundle\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.008803 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbhbr\" (UniqueName: \"kubernetes.io/projected/da670c32-ca2c-438a-a05a-bc6e23779a60-kube-api-access-mbhbr\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.008879 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-ovndb-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.008914 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-public-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.009008 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-httpd-config\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.009077 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-config\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.027185 5050 generic.go:334] "Generic (PLEG): container finished" podID="f88b9496-edac-4fbd-a33b-287b9289d20e" containerID="ec73b421c7e6d9e0dee12a373da83b30f79ab985560b0ef30c0a4587e7612e2c" exitCode=0 Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.027265 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" event={"ID":"f88b9496-edac-4fbd-a33b-287b9289d20e","Type":"ContainerDied","Data":"ec73b421c7e6d9e0dee12a373da83b30f79ab985560b0ef30c0a4587e7612e2c"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.027288 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" event={"ID":"f88b9496-edac-4fbd-a33b-287b9289d20e","Type":"ContainerStarted","Data":"46ede99d3d0508236bf800aed445c9ad9d59cccfea1911519265b177e49f9dd1"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.038497 5050 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-5cc6759b56-tdkxj" event={"ID":"1843d770-24a2-4dbd-bf4d-236aab2a27ca","Type":"ContainerStarted","Data":"6a01325ab7e2d50cbeb3ed0500c6ca3f37e0df1aef683d91db10d57575ba8b35"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.038540 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc6759b56-tdkxj" event={"ID":"1843d770-24a2-4dbd-bf4d-236aab2a27ca","Type":"ContainerStarted","Data":"c635e4689718ddbdb1c57a1fa94f442ed6ae65dec539b7177faf07c8b2a24837"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.041061 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7","Type":"ContainerStarted","Data":"9c263d8cc26f132cf23a97bd553a2466371e14ffe39b4f6f9f01b72f1be5b20f"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.057424 5050 generic.go:334] "Generic (PLEG): container finished" podID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerID="d15223da2b54567712405cac1546eedc57bec271bfe988a5626f6a0ab8f17f78" exitCode=0 Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.057565 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" event={"ID":"ac560e57-d991-4e2f-826b-136d7c6dc075","Type":"ContainerDied","Data":"d15223da2b54567712405cac1546eedc57bec271bfe988a5626f6a0ab8f17f78"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.057607 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" event={"ID":"ac560e57-d991-4e2f-826b-136d7c6dc075","Type":"ContainerStarted","Data":"a99bed8e8460722ddf4d1b6ed9f4112e149bb9ab97f021f2b716951202d7bbc4"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.065211 5050 generic.go:334] "Generic (PLEG): container finished" podID="7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b" containerID="462f853a42a44766e43edd798d9c85e04dadd25e65dc04fcc0618e3d650bccc4" exitCode=0 Jan 31 05:42:04 crc kubenswrapper[5050]: 
I0131 05:42:04.065284 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f173-account-create-update-47tzz" event={"ID":"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b","Type":"ContainerDied","Data":"462f853a42a44766e43edd798d9c85e04dadd25e65dc04fcc0618e3d650bccc4"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.065306 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f173-account-create-update-47tzz" event={"ID":"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b","Type":"ContainerStarted","Data":"78b4066f10cdd51eb655d4936a3845dc4d20d954aca26d78d4687179fec926e6"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.070734 5050 generic.go:334] "Generic (PLEG): container finished" podID="d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac" containerID="e590d65552cbcb107f36936ba5038cdf01ad09707d0da9c596907436b63a0ea3" exitCode=0 Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.071079 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gnrqd" event={"ID":"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac","Type":"ContainerDied","Data":"e590d65552cbcb107f36936ba5038cdf01ad09707d0da9c596907436b63a0ea3"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.071100 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gnrqd" event={"ID":"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac","Type":"ContainerStarted","Data":"c90d8284e062728fd5f24f61041f91b2b660001677b62ae1c2c96efa178fa593"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.107369 5050 generic.go:334] "Generic (PLEG): container finished" podID="4b998de1-8a4c-48c3-a3d5-4bf1309a8394" containerID="c61f0f85a2885bb6f42a3e2da51334a056a26e80fea47eba6e73d0bd19d0ba27" exitCode=0 Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.107626 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2kk7d" 
event={"ID":"4b998de1-8a4c-48c3-a3d5-4bf1309a8394","Type":"ContainerDied","Data":"c61f0f85a2885bb6f42a3e2da51334a056a26e80fea47eba6e73d0bd19d0ba27"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.112831 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-ovndb-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.113055 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-public-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.113179 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-httpd-config\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.113270 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-config\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.113400 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-internal-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " 
pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.113466 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-combined-ca-bundle\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.113525 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbhbr\" (UniqueName: \"kubernetes.io/projected/da670c32-ca2c-438a-a05a-bc6e23779a60-kube-api-access-mbhbr\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.120990 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-ovndb-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.131177 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerStarted","Data":"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454"} Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.131943 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-public-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.138646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-combined-ca-bundle\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.152755 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-httpd-config\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.166760 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-config\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.167256 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbhbr\" (UniqueName: \"kubernetes.io/projected/da670c32-ca2c-438a-a05a-bc6e23779a60-kube-api-access-mbhbr\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.167279 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da670c32-ca2c-438a-a05a-bc6e23779a60-internal-tls-certs\") pod \"neutron-86dbc7dc8f-2zfkt\" (UID: \"da670c32-ca2c-438a-a05a-bc6e23779a60\") " pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.312368 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.740266 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.834603 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-operator-scripts\") pod \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.834715 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrp4f\" (UniqueName: \"kubernetes.io/projected/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-kube-api-access-xrp4f\") pod \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\" (UID: \"6bde430b-fe13-43d9-b5e8-44c9c4953ad7\") " Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.835712 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bde430b-fe13-43d9-b5e8-44c9c4953ad7" (UID: "6bde430b-fe13-43d9-b5e8-44c9c4953ad7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.851153 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-kube-api-access-xrp4f" (OuterVolumeSpecName: "kube-api-access-xrp4f") pod "6bde430b-fe13-43d9-b5e8-44c9c4953ad7" (UID: "6bde430b-fe13-43d9-b5e8-44c9c4953ad7"). InnerVolumeSpecName "kube-api-access-xrp4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.937259 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:04 crc kubenswrapper[5050]: I0131 05:42:04.937292 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrp4f\" (UniqueName: \"kubernetes.io/projected/6bde430b-fe13-43d9-b5e8-44c9c4953ad7-kube-api-access-xrp4f\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.020215 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86dbc7dc8f-2zfkt"] Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.193905 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" event={"ID":"ac560e57-d991-4e2f-826b-136d7c6dc075","Type":"ContainerStarted","Data":"f59c109c088d566c5d6d1cda5458240500551948426de5c20e5d4bdd962cfcf3"} Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.194849 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.198231 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc6759b56-tdkxj" event={"ID":"1843d770-24a2-4dbd-bf4d-236aab2a27ca","Type":"ContainerStarted","Data":"5a6d399784e94e65e82455e1c1259b465602ddbc4089f1e0eee25f38e82389f9"} Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.198366 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.201255 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7","Type":"ContainerStarted","Data":"bc677fbab02826d316846c00d9a92c35b76ef63be2a6d24d3593a078561e6c26"} Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.207544 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerStarted","Data":"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535"} Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.210295 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86dbc7dc8f-2zfkt" event={"ID":"da670c32-ca2c-438a-a05a-bc6e23779a60","Type":"ContainerStarted","Data":"7eb5750d33a37a1fe105b9aa985a279889106dfd9833fe97886ba0db5b6381f7"} Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.211835 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxbwv" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.212183 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxbwv" event={"ID":"6bde430b-fe13-43d9-b5e8-44c9c4953ad7","Type":"ContainerDied","Data":"747f6a9ddb889268665805678e3b7c9384b4a849f8cf361464a9fdf73d3a496b"} Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.212237 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747f6a9ddb889268665805678e3b7c9384b4a849f8cf361464a9fdf73d3a496b" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.226890 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" podStartSLOduration=4.226870826 podStartE2EDuration="4.226870826s" podCreationTimestamp="2026-01-31 05:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:42:05.216350149 +0000 UTC m=+1250.265511745" watchObservedRunningTime="2026-01-31 
05:42:05.226870826 +0000 UTC m=+1250.276032422" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.243728 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cc6759b56-tdkxj" podStartSLOduration=4.243702594 podStartE2EDuration="4.243702594s" podCreationTimestamp="2026-01-31 05:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:42:05.238993315 +0000 UTC m=+1250.288154911" watchObservedRunningTime="2026-01-31 05:42:05.243702594 +0000 UTC m=+1250.292864190" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.622128 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.653877 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.6538406519999995 podStartE2EDuration="5.653840652s" podCreationTimestamp="2026-01-31 05:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:42:05.284443062 +0000 UTC m=+1250.333604658" watchObservedRunningTime="2026-01-31 05:42:05.653840652 +0000 UTC m=+1250.703002248" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.754414 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88b9496-edac-4fbd-a33b-287b9289d20e-operator-scripts\") pod \"f88b9496-edac-4fbd-a33b-287b9289d20e\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.754478 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kw52\" (UniqueName: 
\"kubernetes.io/projected/f88b9496-edac-4fbd-a33b-287b9289d20e-kube-api-access-8kw52\") pod \"f88b9496-edac-4fbd-a33b-287b9289d20e\" (UID: \"f88b9496-edac-4fbd-a33b-287b9289d20e\") " Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.785922 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88b9496-edac-4fbd-a33b-287b9289d20e-kube-api-access-8kw52" (OuterVolumeSpecName: "kube-api-access-8kw52") pod "f88b9496-edac-4fbd-a33b-287b9289d20e" (UID: "f88b9496-edac-4fbd-a33b-287b9289d20e"). InnerVolumeSpecName "kube-api-access-8kw52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.796256 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88b9496-edac-4fbd-a33b-287b9289d20e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f88b9496-edac-4fbd-a33b-287b9289d20e" (UID: "f88b9496-edac-4fbd-a33b-287b9289d20e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.832247 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.856907 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f88b9496-edac-4fbd-a33b-287b9289d20e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.856978 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kw52\" (UniqueName: \"kubernetes.io/projected/f88b9496-edac-4fbd-a33b-287b9289d20e-kube-api-access-8kw52\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.967212 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68226938-30ee-43b0-a15b-4ae65840c5b9-operator-scripts\") pod \"68226938-30ee-43b0-a15b-4ae65840c5b9\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.968142 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68226938-30ee-43b0-a15b-4ae65840c5b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68226938-30ee-43b0-a15b-4ae65840c5b9" (UID: "68226938-30ee-43b0-a15b-4ae65840c5b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.968175 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4stf\" (UniqueName: \"kubernetes.io/projected/68226938-30ee-43b0-a15b-4ae65840c5b9-kube-api-access-p4stf\") pod \"68226938-30ee-43b0-a15b-4ae65840c5b9\" (UID: \"68226938-30ee-43b0-a15b-4ae65840c5b9\") " Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.969242 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68226938-30ee-43b0-a15b-4ae65840c5b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.980639 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68226938-30ee-43b0-a15b-4ae65840c5b9-kube-api-access-p4stf" (OuterVolumeSpecName: "kube-api-access-p4stf") pod "68226938-30ee-43b0-a15b-4ae65840c5b9" (UID: "68226938-30ee-43b0-a15b-4ae65840c5b9"). InnerVolumeSpecName "kube-api-access-p4stf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:05 crc kubenswrapper[5050]: I0131 05:42:05.999526 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.024700 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.032080 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.081252 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4stf\" (UniqueName: \"kubernetes.io/projected/68226938-30ee-43b0-a15b-4ae65840c5b9-kube-api-access-p4stf\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182287 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-operator-scripts\") pod \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182373 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmb2c\" (UniqueName: \"kubernetes.io/projected/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-kube-api-access-wmb2c\") pod \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\" (UID: \"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b\") " Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182472 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-operator-scripts\") pod \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182506 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-operator-scripts\") pod \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182559 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxjmn\" (UniqueName: 
\"kubernetes.io/projected/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-kube-api-access-nxjmn\") pod \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\" (UID: \"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac\") " Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182589 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2s2h\" (UniqueName: \"kubernetes.io/projected/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-kube-api-access-g2s2h\") pod \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\" (UID: \"4b998de1-8a4c-48c3-a3d5-4bf1309a8394\") " Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182735 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b" (UID: "7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.182929 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.183014 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b998de1-8a4c-48c3-a3d5-4bf1309a8394" (UID: "4b998de1-8a4c-48c3-a3d5-4bf1309a8394"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.183522 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac" (UID: "d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.188704 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-kube-api-access-g2s2h" (OuterVolumeSpecName: "kube-api-access-g2s2h") pod "4b998de1-8a4c-48c3-a3d5-4bf1309a8394" (UID: "4b998de1-8a4c-48c3-a3d5-4bf1309a8394"). InnerVolumeSpecName "kube-api-access-g2s2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.188982 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-kube-api-access-wmb2c" (OuterVolumeSpecName: "kube-api-access-wmb2c") pod "7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b" (UID: "7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b"). InnerVolumeSpecName "kube-api-access-wmb2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.190160 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-kube-api-access-nxjmn" (OuterVolumeSpecName: "kube-api-access-nxjmn") pod "d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac" (UID: "d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac"). InnerVolumeSpecName "kube-api-access-nxjmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.224832 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" event={"ID":"f88b9496-edac-4fbd-a33b-287b9289d20e","Type":"ContainerDied","Data":"46ede99d3d0508236bf800aed445c9ad9d59cccfea1911519265b177e49f9dd1"} Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.224868 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46ede99d3d0508236bf800aed445c9ad9d59cccfea1911519265b177e49f9dd1" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.224918 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-70b5-account-create-update-kpzzv" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.228915 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gnrqd" event={"ID":"d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac","Type":"ContainerDied","Data":"c90d8284e062728fd5f24f61041f91b2b660001677b62ae1c2c96efa178fa593"} Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.228960 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90d8284e062728fd5f24f61041f91b2b660001677b62ae1c2c96efa178fa593" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.229023 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gnrqd" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.238773 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.242536 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2kk7d" event={"ID":"4b998de1-8a4c-48c3-a3d5-4bf1309a8394","Type":"ContainerDied","Data":"3891a6849d8c003e144176385ba824e72e07c17ecb40fe9529ccce6d5b4230a5"} Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.242571 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3891a6849d8c003e144176385ba824e72e07c17ecb40fe9529ccce6d5b4230a5" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.242617 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2kk7d" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.246592 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerStarted","Data":"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac"} Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.248325 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86dbc7dc8f-2zfkt" event={"ID":"da670c32-ca2c-438a-a05a-bc6e23779a60","Type":"ContainerStarted","Data":"f16e1fd0ed96360e8a0a3d57b55951592e9e33f0fe1795ac695474b341ab2a3d"} Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.249745 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f173-account-create-update-47tzz" event={"ID":"7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b","Type":"ContainerDied","Data":"78b4066f10cdd51eb655d4936a3845dc4d20d954aca26d78d4687179fec926e6"} Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.249768 5050 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="78b4066f10cdd51eb655d4936a3845dc4d20d954aca26d78d4687179fec926e6" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.249817 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f173-account-create-update-47tzz" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.264156 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-461c-account-create-update-6n75v" event={"ID":"68226938-30ee-43b0-a15b-4ae65840c5b9","Type":"ContainerDied","Data":"2e3d4e0559a8b5401efa41dcdacaa01e313f6c5b8e1f38f4cfa26c41978905ce"} Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.264196 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e3d4e0559a8b5401efa41dcdacaa01e313f6c5b8e1f38f4cfa26c41978905ce" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.264349 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-461c-account-create-update-6n75v" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.284056 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmb2c\" (UniqueName: \"kubernetes.io/projected/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b-kube-api-access-wmb2c\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.284081 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.284090 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.284105 5050 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-nxjmn\" (UniqueName: \"kubernetes.io/projected/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac-kube-api-access-nxjmn\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:06 crc kubenswrapper[5050]: I0131 05:42:06.284114 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2s2h\" (UniqueName: \"kubernetes.io/projected/4b998de1-8a4c-48c3-a3d5-4bf1309a8394-kube-api-access-g2s2h\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:07 crc kubenswrapper[5050]: I0131 05:42:07.277098 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86dbc7dc8f-2zfkt" event={"ID":"da670c32-ca2c-438a-a05a-bc6e23779a60","Type":"ContainerStarted","Data":"a2d141ca3a2073344c1af61a9fbbb7f36e1218f71b14ae0c8b2368f232411312"} Jan 31 05:42:07 crc kubenswrapper[5050]: I0131 05:42:07.277218 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86dbc7dc8f-2zfkt" Jan 31 05:42:07 crc kubenswrapper[5050]: I0131 05:42:07.298453 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86dbc7dc8f-2zfkt" podStartSLOduration=4.298429519 podStartE2EDuration="4.298429519s" podCreationTimestamp="2026-01-31 05:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:42:07.292734894 +0000 UTC m=+1252.341896500" watchObservedRunningTime="2026-01-31 05:42:07.298429519 +0000 UTC m=+1252.347591115" Jan 31 05:42:07 crc kubenswrapper[5050]: I0131 05:42:07.896156 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d9964f68-4b9hp" Jan 31 05:42:07 crc kubenswrapper[5050]: I0131 05:42:07.971607 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d9964f68-4b9hp" Jan 31 05:42:08 crc kubenswrapper[5050]: I0131 05:42:08.285189 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerStarted","Data":"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8"} Jan 31 05:42:08 crc kubenswrapper[5050]: I0131 05:42:08.285470 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="sg-core" containerID="cri-o://6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" gracePeriod=30 Jan 31 05:42:08 crc kubenswrapper[5050]: I0131 05:42:08.285488 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-notification-agent" containerID="cri-o://eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" gracePeriod=30 Jan 31 05:42:08 crc kubenswrapper[5050]: I0131 05:42:08.285788 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="proxy-httpd" containerID="cri-o://11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" gracePeriod=30 Jan 31 05:42:08 crc kubenswrapper[5050]: I0131 05:42:08.286857 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-central-agent" containerID="cri-o://07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" gracePeriod=30 Jan 31 05:42:08 crc kubenswrapper[5050]: I0131 05:42:08.308935 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.717330393 podStartE2EDuration="8.308911433s" podCreationTimestamp="2026-01-31 05:42:00 +0000 UTC" firstStartedPulling="2026-01-31 05:42:02.727600784 +0000 UTC m=+1247.776762380" lastFinishedPulling="2026-01-31 05:42:07.319181824 
+0000 UTC m=+1252.368343420" observedRunningTime="2026-01-31 05:42:08.302044716 +0000 UTC m=+1253.351206332" watchObservedRunningTime="2026-01-31 05:42:08.308911433 +0000 UTC m=+1253.358073019" Jan 31 05:42:08 crc kubenswrapper[5050]: I0131 05:42:08.992535 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.132660 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-log-httpd\") pod \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.132714 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-config-data\") pod \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.133095 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-combined-ca-bundle\") pod \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.133198 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-scripts\") pod \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.133269 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkkvm\" (UniqueName: 
\"kubernetes.io/projected/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-kube-api-access-fkkvm\") pod \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.133339 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-sg-core-conf-yaml\") pod \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.133485 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" (UID: "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.133545 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-run-httpd\") pod \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\" (UID: \"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd\") " Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.133787 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" (UID: "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.134519 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.134544 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.143398 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-kube-api-access-fkkvm" (OuterVolumeSpecName: "kube-api-access-fkkvm") pod "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" (UID: "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd"). InnerVolumeSpecName "kube-api-access-fkkvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.147090 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-scripts" (OuterVolumeSpecName: "scripts") pod "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" (UID: "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.190093 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" (UID: "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.241816 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.241849 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkkvm\" (UniqueName: \"kubernetes.io/projected/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-kube-api-access-fkkvm\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.241863 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.275923 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" (UID: "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.284775 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-config-data" (OuterVolumeSpecName: "config-data") pod "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" (UID: "f6aea0b9-91ca-42d9-88bf-92d11ffc26bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294388 5050 generic.go:334] "Generic (PLEG): container finished" podID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerID="11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" exitCode=0 Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294419 5050 generic.go:334] "Generic (PLEG): container finished" podID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerID="6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" exitCode=2 Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294429 5050 generic.go:334] "Generic (PLEG): container finished" podID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerID="eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" exitCode=0 Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294439 5050 generic.go:334] "Generic (PLEG): container finished" podID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerID="07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" exitCode=0 Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294457 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerDied","Data":"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8"} Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294464 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerDied","Data":"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac"} Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerDied","Data":"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535"} Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294502 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerDied","Data":"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454"} Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294513 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6aea0b9-91ca-42d9-88bf-92d11ffc26bd","Type":"ContainerDied","Data":"fc08701d77dd7007c5af9e9f670297315bd58aa7c95b81dc5b8e253903ee9ff3"} Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.294528 5050 scope.go:117] "RemoveContainer" containerID="11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.316084 5050 scope.go:117] "RemoveContainer" containerID="6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.338816 5050 scope.go:117] "RemoveContainer" containerID="eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.339090 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.343089 5050 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.343122 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.348332 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.356931 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357347 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="proxy-httpd" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357373 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="proxy-httpd" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357389 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-notification-agent" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357398 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-notification-agent" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357423 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac" containerName="mariadb-database-create" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357431 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac" containerName="mariadb-database-create" Jan 31 05:42:09 
crc kubenswrapper[5050]: E0131 05:42:09.357443 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bde430b-fe13-43d9-b5e8-44c9c4953ad7" containerName="mariadb-database-create" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357450 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bde430b-fe13-43d9-b5e8-44c9c4953ad7" containerName="mariadb-database-create" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357466 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-central-agent" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357474 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-central-agent" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357484 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b998de1-8a4c-48c3-a3d5-4bf1309a8394" containerName="mariadb-database-create" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357492 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b998de1-8a4c-48c3-a3d5-4bf1309a8394" containerName="mariadb-database-create" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357501 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357509 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357523 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88b9496-edac-4fbd-a33b-287b9289d20e" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357529 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f88b9496-edac-4fbd-a33b-287b9289d20e" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357544 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="sg-core" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357551 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="sg-core" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.357563 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68226938-30ee-43b0-a15b-4ae65840c5b9" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357568 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="68226938-30ee-43b0-a15b-4ae65840c5b9" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357711 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88b9496-edac-4fbd-a33b-287b9289d20e" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357725 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-central-agent" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357736 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac" containerName="mariadb-database-create" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357742 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="ceilometer-notification-agent" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357749 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bde430b-fe13-43d9-b5e8-44c9c4953ad7" containerName="mariadb-database-create" Jan 31 05:42:09 crc 
kubenswrapper[5050]: I0131 05:42:09.357758 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="68226938-30ee-43b0-a15b-4ae65840c5b9" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357765 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="sg-core" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357772 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b" containerName="mariadb-account-create-update" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357782 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" containerName="proxy-httpd" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.357793 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b998de1-8a4c-48c3-a3d5-4bf1309a8394" containerName="mariadb-database-create" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.359501 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.363369 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.363465 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.364037 5050 scope.go:117] "RemoveContainer" containerID="07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.373576 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.391718 5050 scope.go:117] "RemoveContainer" containerID="11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.393296 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": container with ID starting with 11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8 not found: ID does not exist" containerID="11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.393326 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8"} err="failed to get container status \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": rpc error: code = NotFound desc = could not find container \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": container with ID starting with 11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 
05:42:09.393349 5050 scope.go:117] "RemoveContainer" containerID="6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.393540 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": container with ID starting with 6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac not found: ID does not exist" containerID="6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.393560 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac"} err="failed to get container status \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": rpc error: code = NotFound desc = could not find container \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": container with ID starting with 6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.393573 5050 scope.go:117] "RemoveContainer" containerID="eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.393775 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": container with ID starting with eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535 not found: ID does not exist" containerID="eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.393792 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535"} err="failed to get container status \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": rpc error: code = NotFound desc = could not find container \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": container with ID starting with eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.393805 5050 scope.go:117] "RemoveContainer" containerID="07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" Jan 31 05:42:09 crc kubenswrapper[5050]: E0131 05:42:09.393994 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": container with ID starting with 07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454 not found: ID does not exist" containerID="07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394010 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454"} err="failed to get container status \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": rpc error: code = NotFound desc = could not find container \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": container with ID starting with 07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394023 5050 scope.go:117] "RemoveContainer" containerID="11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394180 5050 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8"} err="failed to get container status \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": rpc error: code = NotFound desc = could not find container \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": container with ID starting with 11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394197 5050 scope.go:117] "RemoveContainer" containerID="6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394340 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac"} err="failed to get container status \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": rpc error: code = NotFound desc = could not find container \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": container with ID starting with 6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394356 5050 scope.go:117] "RemoveContainer" containerID="eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394493 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535"} err="failed to get container status \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": rpc error: code = NotFound desc = could not find container \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": container with ID starting with eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535 not 
found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394509 5050 scope.go:117] "RemoveContainer" containerID="07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394648 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454"} err="failed to get container status \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": rpc error: code = NotFound desc = could not find container \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": container with ID starting with 07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394663 5050 scope.go:117] "RemoveContainer" containerID="11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394804 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8"} err="failed to get container status \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": rpc error: code = NotFound desc = could not find container \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": container with ID starting with 11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.394819 5050 scope.go:117] "RemoveContainer" containerID="6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395006 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac"} err="failed to get 
container status \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": rpc error: code = NotFound desc = could not find container \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": container with ID starting with 6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395022 5050 scope.go:117] "RemoveContainer" containerID="eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395164 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535"} err="failed to get container status \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": rpc error: code = NotFound desc = could not find container \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": container with ID starting with eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395179 5050 scope.go:117] "RemoveContainer" containerID="07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395313 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454"} err="failed to get container status \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": rpc error: code = NotFound desc = could not find container \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": container with ID starting with 07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395327 5050 scope.go:117] "RemoveContainer" 
containerID="11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395464 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8"} err="failed to get container status \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": rpc error: code = NotFound desc = could not find container \"11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8\": container with ID starting with 11b68f50840b354782144c5be1f43e164d7f59097c30ff0b6b70ee8385dd33e8 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395478 5050 scope.go:117] "RemoveContainer" containerID="6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395617 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac"} err="failed to get container status \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": rpc error: code = NotFound desc = could not find container \"6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac\": container with ID starting with 6b5b549fbb6fcf5568df82996dbb442dc9cabbf42e342be5ae96c4b8f8f90eac not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395632 5050 scope.go:117] "RemoveContainer" containerID="eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395770 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535"} err="failed to get container status \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": rpc error: code = NotFound desc = could 
not find container \"eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535\": container with ID starting with eacc900a5d01786ff3f162b6df951c9e87c3454c3a88a3640ff5e43acf618535 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395784 5050 scope.go:117] "RemoveContainer" containerID="07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.395922 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454"} err="failed to get container status \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": rpc error: code = NotFound desc = could not find container \"07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454\": container with ID starting with 07d35e792193530a5acff9f6c9d541c9fe26535c6081af1908245f5e10e57454 not found: ID does not exist" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.446233 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.446303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-run-httpd\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.446332 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b47n5\" (UniqueName: 
\"kubernetes.io/projected/781361ec-578f-4ba7-864d-d1913d7714df-kube-api-access-b47n5\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.446360 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-scripts\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.446386 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.446407 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-config-data\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.446427 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-log-httpd\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.570870 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " 
pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.571335 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-config-data\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.571405 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-log-httpd\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.571720 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.572409 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-run-httpd\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.572531 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b47n5\" (UniqueName: \"kubernetes.io/projected/781361ec-578f-4ba7-864d-d1913d7714df-kube-api-access-b47n5\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.572597 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-scripts\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.573018 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-run-httpd\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.573591 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-log-httpd\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.578797 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-config-data\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.579097 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.579469 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.600423 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-scripts\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.600545 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b47n5\" (UniqueName: \"kubernetes.io/projected/781361ec-578f-4ba7-864d-d1913d7714df-kube-api-access-b47n5\") pod \"ceilometer-0\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.677555 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:09 crc kubenswrapper[5050]: I0131 05:42:09.748206 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6aea0b9-91ca-42d9-88bf-92d11ffc26bd" path="/var/lib/kubelet/pods/f6aea0b9-91ca-42d9-88bf-92d11ffc26bd/volumes" Jan 31 05:42:10 crc kubenswrapper[5050]: I0131 05:42:10.177627 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:10 crc kubenswrapper[5050]: W0131 05:42:10.189935 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod781361ec_578f_4ba7_864d_d1913d7714df.slice/crio-9db59275311d05d7195a58e1c577b5d3f172b9926c0a1bdd40a1fd7e9042f7ff WatchSource:0}: Error finding container 9db59275311d05d7195a58e1c577b5d3f172b9926c0a1bdd40a1fd7e9042f7ff: Status 404 returned error can't find the container with id 9db59275311d05d7195a58e1c577b5d3f172b9926c0a1bdd40a1fd7e9042f7ff Jan 31 05:42:10 crc kubenswrapper[5050]: I0131 05:42:10.305109 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerStarted","Data":"9db59275311d05d7195a58e1c577b5d3f172b9926c0a1bdd40a1fd7e9042f7ff"} Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.317252 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerStarted","Data":"103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5"} Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.354774 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sf47v"] Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.355812 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.360124 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.360379 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mccqk" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.360212 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.375219 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sf47v"] Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.508812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47z9\" (UniqueName: \"kubernetes.io/projected/9480c5f7-4801-47d5-abe0-3a7281596b0b-kube-api-access-q47z9\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.509006 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.509098 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-scripts\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.509199 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-config-data\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.519403 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.610512 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-scripts\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.610977 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-config-data\") pod 
\"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.611030 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47z9\" (UniqueName: \"kubernetes.io/projected/9480c5f7-4801-47d5-abe0-3a7281596b0b-kube-api-access-q47z9\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.611100 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.614304 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-scripts\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.618967 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-config-data\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.619722 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.634555 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47z9\" (UniqueName: \"kubernetes.io/projected/9480c5f7-4801-47d5-abe0-3a7281596b0b-kube-api-access-q47z9\") pod \"nova-cell0-conductor-db-sync-sf47v\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.670139 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.856179 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.923506 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-zw6q9"] Jan 31 05:42:11 crc kubenswrapper[5050]: I0131 05:42:11.923708 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" podUID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerName="dnsmasq-dns" containerID="cri-o://7c37ecb75da725901e892c9be86770ef45bf20a228bde915d4703c349e90fb3f" gracePeriod=10 Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.146679 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sf47v"] Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.334157 5050 generic.go:334] "Generic (PLEG): container finished" podID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerID="7c37ecb75da725901e892c9be86770ef45bf20a228bde915d4703c349e90fb3f" exitCode=0 Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.334226 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" event={"ID":"d544bf99-86ca-41e6-9b6d-c19906cbf426","Type":"ContainerDied","Data":"7c37ecb75da725901e892c9be86770ef45bf20a228bde915d4703c349e90fb3f"} Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.348067 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerStarted","Data":"82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64"} Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.357220 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sf47v" event={"ID":"9480c5f7-4801-47d5-abe0-3a7281596b0b","Type":"ContainerStarted","Data":"cc97c0c8720d71f63a1d687947172f7e64ce48dac184475869e92967cc176e20"} Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.453176 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.628276 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-sb\") pod \"d544bf99-86ca-41e6-9b6d-c19906cbf426\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.628631 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-nb\") pod \"d544bf99-86ca-41e6-9b6d-c19906cbf426\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.628733 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-dns-svc\") pod 
\"d544bf99-86ca-41e6-9b6d-c19906cbf426\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.628780 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-config\") pod \"d544bf99-86ca-41e6-9b6d-c19906cbf426\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.628812 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z77q\" (UniqueName: \"kubernetes.io/projected/d544bf99-86ca-41e6-9b6d-c19906cbf426-kube-api-access-4z77q\") pod \"d544bf99-86ca-41e6-9b6d-c19906cbf426\" (UID: \"d544bf99-86ca-41e6-9b6d-c19906cbf426\") " Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.657703 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d544bf99-86ca-41e6-9b6d-c19906cbf426-kube-api-access-4z77q" (OuterVolumeSpecName: "kube-api-access-4z77q") pod "d544bf99-86ca-41e6-9b6d-c19906cbf426" (UID: "d544bf99-86ca-41e6-9b6d-c19906cbf426"). InnerVolumeSpecName "kube-api-access-4z77q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.670605 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d544bf99-86ca-41e6-9b6d-c19906cbf426" (UID: "d544bf99-86ca-41e6-9b6d-c19906cbf426"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.679789 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d544bf99-86ca-41e6-9b6d-c19906cbf426" (UID: "d544bf99-86ca-41e6-9b6d-c19906cbf426"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.694813 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-config" (OuterVolumeSpecName: "config") pod "d544bf99-86ca-41e6-9b6d-c19906cbf426" (UID: "d544bf99-86ca-41e6-9b6d-c19906cbf426"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.731671 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.731707 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.731723 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z77q\" (UniqueName: \"kubernetes.io/projected/d544bf99-86ca-41e6-9b6d-c19906cbf426-kube-api-access-4z77q\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.731736 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 
05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.734919 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d544bf99-86ca-41e6-9b6d-c19906cbf426" (UID: "d544bf99-86ca-41e6-9b6d-c19906cbf426"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:42:12 crc kubenswrapper[5050]: I0131 05:42:12.833611 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d544bf99-86ca-41e6-9b6d-c19906cbf426-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.374394 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerStarted","Data":"e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7"} Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.376512 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" event={"ID":"d544bf99-86ca-41e6-9b6d-c19906cbf426","Type":"ContainerDied","Data":"7438ca9c95ef92181723b973549c93bfe09fc73cbf4dc80d3232fba41055f5bd"} Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.376566 5050 scope.go:117] "RemoveContainer" containerID="7c37ecb75da725901e892c9be86770ef45bf20a228bde915d4703c349e90fb3f" Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.376581 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-zw6q9" Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.406318 5050 scope.go:117] "RemoveContainer" containerID="d7bdb3aeff041bb068acbf7609d0ee12898491e73e7ac99471d7ef894b8f0f38" Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.408024 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-zw6q9"] Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.414875 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-zw6q9"] Jan 31 05:42:13 crc kubenswrapper[5050]: I0131 05:42:13.750964 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d544bf99-86ca-41e6-9b6d-c19906cbf426" path="/var/lib/kubelet/pods/d544bf99-86ca-41e6-9b6d-c19906cbf426/volumes" Jan 31 05:42:23 crc kubenswrapper[5050]: I0131 05:42:23.502662 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerStarted","Data":"fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0"} Jan 31 05:42:23 crc kubenswrapper[5050]: I0131 05:42:23.503347 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:42:23 crc kubenswrapper[5050]: I0131 05:42:23.507241 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sf47v" event={"ID":"9480c5f7-4801-47d5-abe0-3a7281596b0b","Type":"ContainerStarted","Data":"e2b15e320862bc3eeedeba075a95e6746665e06573dc864e9f3deba129317bf7"} Jan 31 05:42:23 crc kubenswrapper[5050]: I0131 05:42:23.552771 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-sf47v" podStartSLOduration=1.651067495 podStartE2EDuration="12.552748471s" podCreationTimestamp="2026-01-31 05:42:11 +0000 UTC" firstStartedPulling="2026-01-31 05:42:12.155230304 +0000 UTC m=+1257.204391900" 
lastFinishedPulling="2026-01-31 05:42:23.05691127 +0000 UTC m=+1268.106072876" observedRunningTime="2026-01-31 05:42:23.549180254 +0000 UTC m=+1268.598341890" watchObservedRunningTime="2026-01-31 05:42:23.552748471 +0000 UTC m=+1268.601910107" Jan 31 05:42:23 crc kubenswrapper[5050]: I0131 05:42:23.557559 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.469589897 podStartE2EDuration="14.557539481s" podCreationTimestamp="2026-01-31 05:42:09 +0000 UTC" firstStartedPulling="2026-01-31 05:42:10.19390018 +0000 UTC m=+1255.243061776" lastFinishedPulling="2026-01-31 05:42:21.281849744 +0000 UTC m=+1266.331011360" observedRunningTime="2026-01-31 05:42:23.533202899 +0000 UTC m=+1268.582364505" watchObservedRunningTime="2026-01-31 05:42:23.557539481 +0000 UTC m=+1268.606701117" Jan 31 05:42:26 crc kubenswrapper[5050]: I0131 05:42:26.445719 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:26 crc kubenswrapper[5050]: I0131 05:42:26.446193 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-central-agent" containerID="cri-o://103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5" gracePeriod=30 Jan 31 05:42:26 crc kubenswrapper[5050]: I0131 05:42:26.446307 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="proxy-httpd" containerID="cri-o://fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0" gracePeriod=30 Jan 31 05:42:26 crc kubenswrapper[5050]: I0131 05:42:26.446339 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="sg-core" 
containerID="cri-o://e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7" gracePeriod=30 Jan 31 05:42:26 crc kubenswrapper[5050]: I0131 05:42:26.446369 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-notification-agent" containerID="cri-o://82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64" gracePeriod=30 Jan 31 05:42:27 crc kubenswrapper[5050]: I0131 05:42:27.543017 5050 generic.go:334] "Generic (PLEG): container finished" podID="781361ec-578f-4ba7-864d-d1913d7714df" containerID="fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0" exitCode=0 Jan 31 05:42:27 crc kubenswrapper[5050]: I0131 05:42:27.543388 5050 generic.go:334] "Generic (PLEG): container finished" podID="781361ec-578f-4ba7-864d-d1913d7714df" containerID="e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7" exitCode=2 Jan 31 05:42:27 crc kubenswrapper[5050]: I0131 05:42:27.543401 5050 generic.go:334] "Generic (PLEG): container finished" podID="781361ec-578f-4ba7-864d-d1913d7714df" containerID="103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5" exitCode=0 Jan 31 05:42:27 crc kubenswrapper[5050]: I0131 05:42:27.543427 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerDied","Data":"fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0"} Jan 31 05:42:27 crc kubenswrapper[5050]: I0131 05:42:27.543457 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerDied","Data":"e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7"} Jan 31 05:42:27 crc kubenswrapper[5050]: I0131 05:42:27.543471 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerDied","Data":"103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5"} Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.412161 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.582500 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b47n5\" (UniqueName: \"kubernetes.io/projected/781361ec-578f-4ba7-864d-d1913d7714df-kube-api-access-b47n5\") pod \"781361ec-578f-4ba7-864d-d1913d7714df\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.582616 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-sg-core-conf-yaml\") pod \"781361ec-578f-4ba7-864d-d1913d7714df\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.582668 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-scripts\") pod \"781361ec-578f-4ba7-864d-d1913d7714df\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.582704 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-run-httpd\") pod \"781361ec-578f-4ba7-864d-d1913d7714df\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.582726 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-combined-ca-bundle\") pod 
\"781361ec-578f-4ba7-864d-d1913d7714df\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.582755 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-log-httpd\") pod \"781361ec-578f-4ba7-864d-d1913d7714df\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.582797 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-config-data\") pod \"781361ec-578f-4ba7-864d-d1913d7714df\" (UID: \"781361ec-578f-4ba7-864d-d1913d7714df\") " Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.583278 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "781361ec-578f-4ba7-864d-d1913d7714df" (UID: "781361ec-578f-4ba7-864d-d1913d7714df"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.583405 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "781361ec-578f-4ba7-864d-d1913d7714df" (UID: "781361ec-578f-4ba7-864d-d1913d7714df"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.583460 5050 generic.go:334] "Generic (PLEG): container finished" podID="781361ec-578f-4ba7-864d-d1913d7714df" containerID="82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64" exitCode=0 Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.583497 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerDied","Data":"82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64"} Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.583525 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.583548 5050 scope.go:117] "RemoveContainer" containerID="fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.583533 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"781361ec-578f-4ba7-864d-d1913d7714df","Type":"ContainerDied","Data":"9db59275311d05d7195a58e1c577b5d3f172b9926c0a1bdd40a1fd7e9042f7ff"} Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.589074 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-scripts" (OuterVolumeSpecName: "scripts") pod "781361ec-578f-4ba7-864d-d1913d7714df" (UID: "781361ec-578f-4ba7-864d-d1913d7714df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.597514 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781361ec-578f-4ba7-864d-d1913d7714df-kube-api-access-b47n5" (OuterVolumeSpecName: "kube-api-access-b47n5") pod "781361ec-578f-4ba7-864d-d1913d7714df" (UID: "781361ec-578f-4ba7-864d-d1913d7714df"). InnerVolumeSpecName "kube-api-access-b47n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.625766 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "781361ec-578f-4ba7-864d-d1913d7714df" (UID: "781361ec-578f-4ba7-864d-d1913d7714df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.659317 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "781361ec-578f-4ba7-864d-d1913d7714df" (UID: "781361ec-578f-4ba7-864d-d1913d7714df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.684683 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.684721 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.685070 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.685084 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.685097 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/781361ec-578f-4ba7-864d-d1913d7714df-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.685109 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b47n5\" (UniqueName: \"kubernetes.io/projected/781361ec-578f-4ba7-864d-d1913d7714df-kube-api-access-b47n5\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.721836 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-config-data" (OuterVolumeSpecName: "config-data") pod "781361ec-578f-4ba7-864d-d1913d7714df" (UID: "781361ec-578f-4ba7-864d-d1913d7714df"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.724747 5050 scope.go:117] "RemoveContainer" containerID="e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.748158 5050 scope.go:117] "RemoveContainer" containerID="82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.771634 5050 scope.go:117] "RemoveContainer" containerID="103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.786377 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781361ec-578f-4ba7-864d-d1913d7714df-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.807363 5050 scope.go:117] "RemoveContainer" containerID="fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.808543 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0\": container with ID starting with fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0 not found: ID does not exist" containerID="fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.808585 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0"} err="failed to get container status \"fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0\": rpc error: code = NotFound desc = could not find container \"fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0\": 
container with ID starting with fa2ada87e459d8b7e4f76f1461e2008b0557a9c48fbe9c8ca70b6365e9fcbac0 not found: ID does not exist" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.808608 5050 scope.go:117] "RemoveContainer" containerID="e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.810552 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7\": container with ID starting with e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7 not found: ID does not exist" containerID="e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.810586 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7"} err="failed to get container status \"e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7\": rpc error: code = NotFound desc = could not find container \"e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7\": container with ID starting with e9b13301da19938ee9cc20e511d24782cae3356090211ed32b70c0440b065cb7 not found: ID does not exist" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.810615 5050 scope.go:117] "RemoveContainer" containerID="82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.812120 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64\": container with ID starting with 82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64 not found: ID does not exist" 
containerID="82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.812158 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64"} err="failed to get container status \"82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64\": rpc error: code = NotFound desc = could not find container \"82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64\": container with ID starting with 82151da08a6920f3eace9c97860a0b0960482e9ba066dc8253e47bf8114dbd64 not found: ID does not exist" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.812177 5050 scope.go:117] "RemoveContainer" containerID="103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.813479 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5\": container with ID starting with 103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5 not found: ID does not exist" containerID="103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.813521 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5"} err="failed to get container status \"103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5\": rpc error: code = NotFound desc = could not find container \"103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5\": container with ID starting with 103ef58db812bdcf74580c1c82f4a996ceeee134b2914d304d0547ca9d15bcb5 not found: ID does not exist" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.904696 5050 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.913268 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.935492 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.935899 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerName="init" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.935916 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerName="init" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.935935 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-notification-agent" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.935941 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-notification-agent" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.935978 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="proxy-httpd" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.935985 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="proxy-httpd" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.935999 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="sg-core" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936005 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="sg-core" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.936021 5050 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-central-agent" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936028 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-central-agent" Jan 31 05:42:31 crc kubenswrapper[5050]: E0131 05:42:31.936038 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerName="dnsmasq-dns" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936044 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerName="dnsmasq-dns" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936200 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-notification-agent" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936212 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d544bf99-86ca-41e6-9b6d-c19906cbf426" containerName="dnsmasq-dns" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936221 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="proxy-httpd" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936234 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="sg-core" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.936243 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="781361ec-578f-4ba7-864d-d1913d7714df" containerName="ceilometer-central-agent" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.937686 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.940076 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.940346 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:42:31 crc kubenswrapper[5050]: I0131 05:42:31.955823 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.043887 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.092046 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxwl\" (UniqueName: \"kubernetes.io/projected/0a2df591-5733-483f-b212-1a8e5608b5c9-kube-api-access-qhxwl\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.092099 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-log-httpd\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.092125 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-config-data\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.092159 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-scripts\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.092180 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.092197 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.092265 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-run-httpd\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.193482 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-log-httpd\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.193529 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-config-data\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " 
pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.193576 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-scripts\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.193599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.193617 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.193715 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-run-httpd\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.193747 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxwl\" (UniqueName: \"kubernetes.io/projected/0a2df591-5733-483f-b212-1a8e5608b5c9-kube-api-access-qhxwl\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.194432 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-log-httpd\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.197180 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-run-httpd\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.198830 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-config-data\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.199290 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.200500 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-scripts\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.200786 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.211260 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qhxwl\" (UniqueName: \"kubernetes.io/projected/0a2df591-5733-483f-b212-1a8e5608b5c9-kube-api-access-qhxwl\") pod \"ceilometer-0\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.252934 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.655713 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:32 crc kubenswrapper[5050]: I0131 05:42:32.685872 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:32 crc kubenswrapper[5050]: W0131 05:42:32.708485 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a2df591_5733_483f_b212_1a8e5608b5c9.slice/crio-9316c472563e89bf0bdd779b4bd82813b9e01206364446ecaaf91b3783a8f9d9 WatchSource:0}: Error finding container 9316c472563e89bf0bdd779b4bd82813b9e01206364446ecaaf91b3783a8f9d9: Status 404 returned error can't find the container with id 9316c472563e89bf0bdd779b4bd82813b9e01206364446ecaaf91b3783a8f9d9 Jan 31 05:42:33 crc kubenswrapper[5050]: I0131 05:42:33.602794 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerStarted","Data":"9316c472563e89bf0bdd779b4bd82813b9e01206364446ecaaf91b3783a8f9d9"} Jan 31 05:42:33 crc kubenswrapper[5050]: I0131 05:42:33.747428 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781361ec-578f-4ba7-864d-d1913d7714df" path="/var/lib/kubelet/pods/781361ec-578f-4ba7-864d-d1913d7714df/volumes" Jan 31 05:42:34 crc kubenswrapper[5050]: I0131 05:42:34.326845 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86dbc7dc8f-2zfkt" 
Jan 31 05:42:34 crc kubenswrapper[5050]: I0131 05:42:34.409097 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cc6759b56-tdkxj"] Jan 31 05:42:34 crc kubenswrapper[5050]: I0131 05:42:34.409368 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cc6759b56-tdkxj" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-api" containerID="cri-o://6a01325ab7e2d50cbeb3ed0500c6ca3f37e0df1aef683d91db10d57575ba8b35" gracePeriod=30 Jan 31 05:42:34 crc kubenswrapper[5050]: I0131 05:42:34.409663 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cc6759b56-tdkxj" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-httpd" containerID="cri-o://5a6d399784e94e65e82455e1c1259b465602ddbc4089f1e0eee25f38e82389f9" gracePeriod=30 Jan 31 05:42:34 crc kubenswrapper[5050]: I0131 05:42:34.612027 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerStarted","Data":"7fc86d85066912b19b0f268a0e491210d074ed6224809fe8c66397dee72282f1"} Jan 31 05:42:35 crc kubenswrapper[5050]: I0131 05:42:35.622851 5050 generic.go:334] "Generic (PLEG): container finished" podID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerID="5a6d399784e94e65e82455e1c1259b465602ddbc4089f1e0eee25f38e82389f9" exitCode=0 Jan 31 05:42:35 crc kubenswrapper[5050]: I0131 05:42:35.622896 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc6759b56-tdkxj" event={"ID":"1843d770-24a2-4dbd-bf4d-236aab2a27ca","Type":"ContainerDied","Data":"5a6d399784e94e65e82455e1c1259b465602ddbc4089f1e0eee25f38e82389f9"} Jan 31 05:42:42 crc kubenswrapper[5050]: I0131 05:42:42.278266 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerName="cinder-scheduler" probeResult="failure" 
output="Get \"http://10.217.0.156:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 05:42:44 crc kubenswrapper[5050]: I0131 05:42:44.763633 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerStarted","Data":"e68a9b7f9975ad8786cda6762f8d7af3a0a56544f929c4123fc78d243bd41d9f"} Jan 31 05:42:44 crc kubenswrapper[5050]: I0131 05:42:44.764325 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerStarted","Data":"8432c306049230668e51a04093623335b8f9245cc2a2e34d9cc5ff0209ab360d"} Jan 31 05:42:46 crc kubenswrapper[5050]: I0131 05:42:46.797748 5050 generic.go:334] "Generic (PLEG): container finished" podID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerID="6a01325ab7e2d50cbeb3ed0500c6ca3f37e0df1aef683d91db10d57575ba8b35" exitCode=0 Jan 31 05:42:46 crc kubenswrapper[5050]: I0131 05:42:46.798114 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc6759b56-tdkxj" event={"ID":"1843d770-24a2-4dbd-bf4d-236aab2a27ca","Type":"ContainerDied","Data":"6a01325ab7e2d50cbeb3ed0500c6ca3f37e0df1aef683d91db10d57575ba8b35"} Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.030353 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.197724 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-ovndb-tls-certs\") pod \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.198241 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-combined-ca-bundle\") pod \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.198350 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-config\") pod \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.198411 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-httpd-config\") pod \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.198463 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rmkv\" (UniqueName: \"kubernetes.io/projected/1843d770-24a2-4dbd-bf4d-236aab2a27ca-kube-api-access-5rmkv\") pod \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\" (UID: \"1843d770-24a2-4dbd-bf4d-236aab2a27ca\") " Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.204086 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1843d770-24a2-4dbd-bf4d-236aab2a27ca" (UID: "1843d770-24a2-4dbd-bf4d-236aab2a27ca"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.204148 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1843d770-24a2-4dbd-bf4d-236aab2a27ca-kube-api-access-5rmkv" (OuterVolumeSpecName: "kube-api-access-5rmkv") pod "1843d770-24a2-4dbd-bf4d-236aab2a27ca" (UID: "1843d770-24a2-4dbd-bf4d-236aab2a27ca"). InnerVolumeSpecName "kube-api-access-5rmkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.256701 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1843d770-24a2-4dbd-bf4d-236aab2a27ca" (UID: "1843d770-24a2-4dbd-bf4d-236aab2a27ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.282101 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1843d770-24a2-4dbd-bf4d-236aab2a27ca" (UID: "1843d770-24a2-4dbd-bf4d-236aab2a27ca"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.285224 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-config" (OuterVolumeSpecName: "config") pod "1843d770-24a2-4dbd-bf4d-236aab2a27ca" (UID: "1843d770-24a2-4dbd-bf4d-236aab2a27ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.300824 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.300866 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.300881 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.300892 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rmkv\" (UniqueName: \"kubernetes.io/projected/1843d770-24a2-4dbd-bf4d-236aab2a27ca-kube-api-access-5rmkv\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.300904 5050 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1843d770-24a2-4dbd-bf4d-236aab2a27ca-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.809733 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerStarted","Data":"2a268f99e0dc97d819a4213b019b5fbc23ff156a4270deb70e399c6a80560ea4"} Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.809933 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="ceilometer-central-agent" 
containerID="cri-o://7fc86d85066912b19b0f268a0e491210d074ed6224809fe8c66397dee72282f1" gracePeriod=30 Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.809995 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.810119 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="proxy-httpd" containerID="cri-o://2a268f99e0dc97d819a4213b019b5fbc23ff156a4270deb70e399c6a80560ea4" gracePeriod=30 Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.810182 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="sg-core" containerID="cri-o://e68a9b7f9975ad8786cda6762f8d7af3a0a56544f929c4123fc78d243bd41d9f" gracePeriod=30 Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.810235 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="ceilometer-notification-agent" containerID="cri-o://8432c306049230668e51a04093623335b8f9245cc2a2e34d9cc5ff0209ab360d" gracePeriod=30 Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.812932 5050 generic.go:334] "Generic (PLEG): container finished" podID="9480c5f7-4801-47d5-abe0-3a7281596b0b" containerID="e2b15e320862bc3eeedeba075a95e6746665e06573dc864e9f3deba129317bf7" exitCode=0 Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.813275 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sf47v" event={"ID":"9480c5f7-4801-47d5-abe0-3a7281596b0b","Type":"ContainerDied","Data":"e2b15e320862bc3eeedeba075a95e6746665e06573dc864e9f3deba129317bf7"} Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.825695 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5cc6759b56-tdkxj" event={"ID":"1843d770-24a2-4dbd-bf4d-236aab2a27ca","Type":"ContainerDied","Data":"c635e4689718ddbdb1c57a1fa94f442ed6ae65dec539b7177faf07c8b2a24837"} Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.825763 5050 scope.go:117] "RemoveContainer" containerID="5a6d399784e94e65e82455e1c1259b465602ddbc4089f1e0eee25f38e82389f9" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.825803 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cc6759b56-tdkxj" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.849228 5050 scope.go:117] "RemoveContainer" containerID="6a01325ab7e2d50cbeb3ed0500c6ca3f37e0df1aef683d91db10d57575ba8b35" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.851167 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.742053494 podStartE2EDuration="16.851141447s" podCreationTimestamp="2026-01-31 05:42:31 +0000 UTC" firstStartedPulling="2026-01-31 05:42:32.71060222 +0000 UTC m=+1277.759763826" lastFinishedPulling="2026-01-31 05:42:46.819690183 +0000 UTC m=+1291.868851779" observedRunningTime="2026-01-31 05:42:47.839584382 +0000 UTC m=+1292.888746008" watchObservedRunningTime="2026-01-31 05:42:47.851141447 +0000 UTC m=+1292.900303053" Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.878982 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cc6759b56-tdkxj"] Jan 31 05:42:47 crc kubenswrapper[5050]: I0131 05:42:47.888782 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cc6759b56-tdkxj"] Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.845719 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerID="2a268f99e0dc97d819a4213b019b5fbc23ff156a4270deb70e399c6a80560ea4" exitCode=0 Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.845748 5050 
generic.go:334] "Generic (PLEG): container finished" podID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerID="e68a9b7f9975ad8786cda6762f8d7af3a0a56544f929c4123fc78d243bd41d9f" exitCode=2 Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.845755 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerID="8432c306049230668e51a04093623335b8f9245cc2a2e34d9cc5ff0209ab360d" exitCode=0 Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.845762 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerID="7fc86d85066912b19b0f268a0e491210d074ed6224809fe8c66397dee72282f1" exitCode=0 Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.845923 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerDied","Data":"2a268f99e0dc97d819a4213b019b5fbc23ff156a4270deb70e399c6a80560ea4"} Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.846061 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerDied","Data":"e68a9b7f9975ad8786cda6762f8d7af3a0a56544f929c4123fc78d243bd41d9f"} Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.846485 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerDied","Data":"8432c306049230668e51a04093623335b8f9245cc2a2e34d9cc5ff0209ab360d"} Jan 31 05:42:48 crc kubenswrapper[5050]: I0131 05:42:48.846534 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerDied","Data":"7fc86d85066912b19b0f268a0e491210d074ed6224809fe8c66397dee72282f1"} Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.012117 5050 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.141599 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-scripts\") pod \"0a2df591-5733-483f-b212-1a8e5608b5c9\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.141692 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-log-httpd\") pod \"0a2df591-5733-483f-b212-1a8e5608b5c9\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.141774 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-combined-ca-bundle\") pod \"0a2df591-5733-483f-b212-1a8e5608b5c9\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.141819 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-config-data\") pod \"0a2df591-5733-483f-b212-1a8e5608b5c9\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.141848 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-sg-core-conf-yaml\") pod \"0a2df591-5733-483f-b212-1a8e5608b5c9\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.141876 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhxwl\" (UniqueName: 
\"kubernetes.io/projected/0a2df591-5733-483f-b212-1a8e5608b5c9-kube-api-access-qhxwl\") pod \"0a2df591-5733-483f-b212-1a8e5608b5c9\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.141895 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-run-httpd\") pod \"0a2df591-5733-483f-b212-1a8e5608b5c9\" (UID: \"0a2df591-5733-483f-b212-1a8e5608b5c9\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.142615 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a2df591-5733-483f-b212-1a8e5608b5c9" (UID: "0a2df591-5733-483f-b212-1a8e5608b5c9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.144645 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a2df591-5733-483f-b212-1a8e5608b5c9" (UID: "0a2df591-5733-483f-b212-1a8e5608b5c9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.149081 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2df591-5733-483f-b212-1a8e5608b5c9-kube-api-access-qhxwl" (OuterVolumeSpecName: "kube-api-access-qhxwl") pod "0a2df591-5733-483f-b212-1a8e5608b5c9" (UID: "0a2df591-5733-483f-b212-1a8e5608b5c9"). InnerVolumeSpecName "kube-api-access-qhxwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.149585 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-scripts" (OuterVolumeSpecName: "scripts") pod "0a2df591-5733-483f-b212-1a8e5608b5c9" (UID: "0a2df591-5733-483f-b212-1a8e5608b5c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.174767 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a2df591-5733-483f-b212-1a8e5608b5c9" (UID: "0a2df591-5733-483f-b212-1a8e5608b5c9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.212940 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a2df591-5733-483f-b212-1a8e5608b5c9" (UID: "0a2df591-5733-483f-b212-1a8e5608b5c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.215890 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.244477 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.244510 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.244522 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.244531 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhxwl\" (UniqueName: \"kubernetes.io/projected/0a2df591-5733-483f-b212-1a8e5608b5c9-kube-api-access-qhxwl\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.244541 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a2df591-5733-483f-b212-1a8e5608b5c9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.244549 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.257511 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-config-data" (OuterVolumeSpecName: "config-data") pod "0a2df591-5733-483f-b212-1a8e5608b5c9" (UID: 
"0a2df591-5733-483f-b212-1a8e5608b5c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.345503 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-combined-ca-bundle\") pod \"9480c5f7-4801-47d5-abe0-3a7281596b0b\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.345555 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q47z9\" (UniqueName: \"kubernetes.io/projected/9480c5f7-4801-47d5-abe0-3a7281596b0b-kube-api-access-q47z9\") pod \"9480c5f7-4801-47d5-abe0-3a7281596b0b\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.345614 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-scripts\") pod \"9480c5f7-4801-47d5-abe0-3a7281596b0b\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.345684 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-config-data\") pod \"9480c5f7-4801-47d5-abe0-3a7281596b0b\" (UID: \"9480c5f7-4801-47d5-abe0-3a7281596b0b\") " Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.345993 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2df591-5733-483f-b212-1a8e5608b5c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.349401 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9480c5f7-4801-47d5-abe0-3a7281596b0b-kube-api-access-q47z9" (OuterVolumeSpecName: "kube-api-access-q47z9") pod "9480c5f7-4801-47d5-abe0-3a7281596b0b" (UID: "9480c5f7-4801-47d5-abe0-3a7281596b0b"). InnerVolumeSpecName "kube-api-access-q47z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.360900 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-scripts" (OuterVolumeSpecName: "scripts") pod "9480c5f7-4801-47d5-abe0-3a7281596b0b" (UID: "9480c5f7-4801-47d5-abe0-3a7281596b0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.374564 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-config-data" (OuterVolumeSpecName: "config-data") pod "9480c5f7-4801-47d5-abe0-3a7281596b0b" (UID: "9480c5f7-4801-47d5-abe0-3a7281596b0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.375134 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9480c5f7-4801-47d5-abe0-3a7281596b0b" (UID: "9480c5f7-4801-47d5-abe0-3a7281596b0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.448156 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.448191 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.448204 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9480c5f7-4801-47d5-abe0-3a7281596b0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.448216 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q47z9\" (UniqueName: \"kubernetes.io/projected/9480c5f7-4801-47d5-abe0-3a7281596b0b-kube-api-access-q47z9\") on node \"crc\" DevicePath \"\"" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.749903 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" path="/var/lib/kubelet/pods/1843d770-24a2-4dbd-bf4d-236aab2a27ca/volumes" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.860330 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a2df591-5733-483f-b212-1a8e5608b5c9","Type":"ContainerDied","Data":"9316c472563e89bf0bdd779b4bd82813b9e01206364446ecaaf91b3783a8f9d9"} Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.860447 5050 scope.go:117] "RemoveContainer" containerID="2a268f99e0dc97d819a4213b019b5fbc23ff156a4270deb70e399c6a80560ea4" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.860372 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.862944 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sf47v" event={"ID":"9480c5f7-4801-47d5-abe0-3a7281596b0b","Type":"ContainerDied","Data":"cc97c0c8720d71f63a1d687947172f7e64ce48dac184475869e92967cc176e20"} Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.863058 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc97c0c8720d71f63a1d687947172f7e64ce48dac184475869e92967cc176e20" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.863147 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sf47v" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.892422 5050 scope.go:117] "RemoveContainer" containerID="e68a9b7f9975ad8786cda6762f8d7af3a0a56544f929c4123fc78d243bd41d9f" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.897010 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.916003 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.931139 5050 scope.go:117] "RemoveContainer" containerID="8432c306049230668e51a04093623335b8f9245cc2a2e34d9cc5ff0209ab360d" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937379 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:42:49 crc kubenswrapper[5050]: E0131 05:42:49.937718 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="sg-core" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937736 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="sg-core" Jan 31 05:42:49 crc 
kubenswrapper[5050]: E0131 05:42:49.937754 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="ceilometer-notification-agent" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937761 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="ceilometer-notification-agent" Jan 31 05:42:49 crc kubenswrapper[5050]: E0131 05:42:49.937772 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="proxy-httpd" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937779 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="proxy-httpd" Jan 31 05:42:49 crc kubenswrapper[5050]: E0131 05:42:49.937792 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9480c5f7-4801-47d5-abe0-3a7281596b0b" containerName="nova-cell0-conductor-db-sync" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937797 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9480c5f7-4801-47d5-abe0-3a7281596b0b" containerName="nova-cell0-conductor-db-sync" Jan 31 05:42:49 crc kubenswrapper[5050]: E0131 05:42:49.937805 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-httpd" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937810 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-httpd" Jan 31 05:42:49 crc kubenswrapper[5050]: E0131 05:42:49.937825 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="ceilometer-central-agent" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937831 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" 
containerName="ceilometer-central-agent" Jan 31 05:42:49 crc kubenswrapper[5050]: E0131 05:42:49.937845 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-api" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.937853 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-api" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.938013 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="sg-core" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.938028 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="proxy-httpd" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.938037 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-httpd" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.938049 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1843d770-24a2-4dbd-bf4d-236aab2a27ca" containerName="neutron-api" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.938060 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="ceilometer-central-agent" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.938070 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="9480c5f7-4801-47d5-abe0-3a7281596b0b" containerName="nova-cell0-conductor-db-sync" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.938078 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" containerName="ceilometer-notification-agent" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.939552 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.946752 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.946836 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.958876 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-log-httpd\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.959025 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdg4\" (UniqueName: \"kubernetes.io/projected/36143374-28fb-4560-97c8-11f65509228e-kube-api-access-cvdg4\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.959053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.959079 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-config-data\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0" Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.959123 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-run-httpd\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.959195 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-scripts\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.959221 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.960161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:42:49 crc kubenswrapper[5050]: I0131 05:42:49.974028 5050 scope.go:117] "RemoveContainer" containerID="7fc86d85066912b19b0f268a0e491210d074ed6224809fe8c66397dee72282f1"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.041447 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.042428 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.045630 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mccqk"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.047117 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.060827 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-run-httpd\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061003 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-scripts\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061046 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061090 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061203 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-log-httpd\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061286 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061349 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvdg4\" (UniqueName: \"kubernetes.io/projected/36143374-28fb-4560-97c8-11f65509228e-kube-api-access-cvdg4\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061393 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhvs\" (UniqueName: \"kubernetes.io/projected/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-kube-api-access-hrhvs\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061437 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.061486 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-config-data\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.064446 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-run-httpd\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.064576 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-log-httpd\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.066364 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-scripts\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.067711 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.075178 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-config-data\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.083727 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.090058 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvdg4\" (UniqueName: \"kubernetes.io/projected/36143374-28fb-4560-97c8-11f65509228e-kube-api-access-cvdg4\") pod \"ceilometer-0\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.105943 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.163018 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.163104 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.163131 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhvs\" (UniqueName: \"kubernetes.io/projected/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-kube-api-access-hrhvs\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.166239 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.166552 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.180981 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhvs\" (UniqueName: \"kubernetes.io/projected/e5e9f0f0-1757-4e0c-b6c1-289c93df190b-kube-api-access-hrhvs\") pod \"nova-cell0-conductor-0\" (UID: \"e5e9f0f0-1757-4e0c-b6c1-289c93df190b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.262315 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.360424 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:50 crc kubenswrapper[5050]: W0131 05:42:50.748831 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36143374_28fb_4560_97c8_11f65509228e.slice/crio-9f3b0446c7f7ca0759ee0a2c76f15e55759c2a4652e8d3993e102352e5164ca7 WatchSource:0}: Error finding container 9f3b0446c7f7ca0759ee0a2c76f15e55759c2a4652e8d3993e102352e5164ca7: Status 404 returned error can't find the container with id 9f3b0446c7f7ca0759ee0a2c76f15e55759c2a4652e8d3993e102352e5164ca7
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.758147 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:42:50 crc kubenswrapper[5050]: W0131 05:42:50.760841 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e9f0f0_1757_4e0c_b6c1_289c93df190b.slice/crio-65c5e623c729039a21de65a169d7ea65aa1f747f88a107012cad4f412d87b37a WatchSource:0}: Error finding container 65c5e623c729039a21de65a169d7ea65aa1f747f88a107012cad4f412d87b37a: Status 404 returned error can't find the container with id 65c5e623c729039a21de65a169d7ea65aa1f747f88a107012cad4f412d87b37a
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.768685 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.871970 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerStarted","Data":"9f3b0446c7f7ca0759ee0a2c76f15e55759c2a4652e8d3993e102352e5164ca7"}
Jan 31 05:42:50 crc kubenswrapper[5050]: I0131 05:42:50.875621 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5e9f0f0-1757-4e0c-b6c1-289c93df190b","Type":"ContainerStarted","Data":"65c5e623c729039a21de65a169d7ea65aa1f747f88a107012cad4f412d87b37a"}
Jan 31 05:42:51 crc kubenswrapper[5050]: I0131 05:42:51.758330 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2df591-5733-483f-b212-1a8e5608b5c9" path="/var/lib/kubelet/pods/0a2df591-5733-483f-b212-1a8e5608b5c9/volumes"
Jan 31 05:42:51 crc kubenswrapper[5050]: I0131 05:42:51.903581 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerStarted","Data":"f23e34a4261dda895547ff3c4461dcacd4af397476c9e324ab03d89b398ae469"}
Jan 31 05:42:51 crc kubenswrapper[5050]: I0131 05:42:51.905366 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5e9f0f0-1757-4e0c-b6c1-289c93df190b","Type":"ContainerStarted","Data":"9750b40dd1aa0dae8d987c80de380e8810082991b973b66577496c5b11f9e52f"}
Jan 31 05:42:51 crc kubenswrapper[5050]: I0131 05:42:51.906343 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 31 05:42:51 crc kubenswrapper[5050]: I0131 05:42:51.929498 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.9294816209999999 podStartE2EDuration="1.929481621s" podCreationTimestamp="2026-01-31 05:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:42:51.92612761 +0000 UTC m=+1296.975289206" watchObservedRunningTime="2026-01-31 05:42:51.929481621 +0000 UTC m=+1296.978643207"
Jan 31 05:42:52 crc kubenswrapper[5050]: I0131 05:42:52.916785 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerStarted","Data":"c375e0d99eed681e5e248d40878ee54b0ccb5f36c4ad918a437e25ae9612bab7"}
Jan 31 05:42:52 crc kubenswrapper[5050]: I0131 05:42:52.917289 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerStarted","Data":"e2686047e3546949020b7d93ad7197b6de81984c54bddb2b19cbbb46af6bac1f"}
Jan 31 05:42:56 crc kubenswrapper[5050]: I0131 05:42:56.966396 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerStarted","Data":"a73f158801d8f0e89fec5036782ccedc80038638f5eb3c1df68ee1ed09335db2"}
Jan 31 05:42:56 crc kubenswrapper[5050]: I0131 05:42:56.967042 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 31 05:42:57 crc kubenswrapper[5050]: I0131 05:42:57.028670 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.05144057 podStartE2EDuration="8.02864335s" podCreationTimestamp="2026-01-31 05:42:49 +0000 UTC" firstStartedPulling="2026-01-31 05:42:50.751161791 +0000 UTC m=+1295.800323387" lastFinishedPulling="2026-01-31 05:42:55.728364571 +0000 UTC m=+1300.777526167" observedRunningTime="2026-01-31 05:42:57.011872044 +0000 UTC m=+1302.061033650" watchObservedRunningTime="2026-01-31 05:42:57.02864335 +0000 UTC m=+1302.077804976"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.386186 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.842181 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-75rfw"]
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.843348 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.846442 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.846732 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.865399 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-75rfw"]
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.979537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.979618 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-config-data\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.979674 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kk2l\" (UniqueName: \"kubernetes.io/projected/18c98739-a178-40c1-94b1-a60d20b26f6e-kube-api-access-7kk2l\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:00 crc kubenswrapper[5050]: I0131 05:43:00.979811 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-scripts\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.029403 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.030568 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.034381 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.039484 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.068760 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.070098 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.074238 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.081965 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.082010 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-config-data\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.082044 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kk2l\" (UniqueName: \"kubernetes.io/projected/18c98739-a178-40c1-94b1-a60d20b26f6e-kube-api-access-7kk2l\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.082080 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-scripts\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.093667 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.101000 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-config-data\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.110833 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.111583 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-scripts\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.118699 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kk2l\" (UniqueName: \"kubernetes.io/projected/18c98739-a178-40c1-94b1-a60d20b26f6e-kube-api-access-7kk2l\") pod \"nova-cell0-cell-mapping-75rfw\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.169542 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-t5fw8"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.184498 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.189500 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ca4c937-3ee5-4bca-b640-c19715cc2900-logs\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.189625 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnhj\" (UniqueName: \"kubernetes.io/projected/5ca4c937-3ee5-4bca-b640-c19715cc2900-kube-api-access-psnhj\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.189703 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-config-data\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.189801 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.189874 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-config-data\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.189939 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xndk\" (UniqueName: \"kubernetes.io/projected/38add41c-ac98-4032-afdc-492adcadac0a-kube-api-access-4xndk\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.190046 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.192040 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75rfw"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.196925 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.199093 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.205559 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.205889 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-t5fw8"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.230194 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.255879 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.256971 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.266446 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291280 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291467 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291504 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdlv\" (UniqueName: \"kubernetes.io/projected/f145abf7-672b-48e2-80e6-52fdae845626-kube-api-access-5jdlv\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291557 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f8b4-9c58-457a-8742-56fa84945fc6-logs\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291595 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291626 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ca4c937-3ee5-4bca-b640-c19715cc2900-logs\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291647 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291670 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-dns-svc\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291697 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psnhj\" (UniqueName: \"kubernetes.io/projected/5ca4c937-3ee5-4bca-b640-c19715cc2900-kube-api-access-psnhj\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291713 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-config-data\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291766 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291792 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-config-data\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdtp\" (UniqueName: \"kubernetes.io/projected/6bd7f8b4-9c58-457a-8742-56fa84945fc6-kube-api-access-fvdtp\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291833 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xndk\" (UniqueName: \"kubernetes.io/projected/38add41c-ac98-4032-afdc-492adcadac0a-kube-api-access-4xndk\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291853 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-config-data\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.291871 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-config\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.292616 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ca4c937-3ee5-4bca-b640-c19715cc2900-logs\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.296804 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.298140 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.299411 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-config-data\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.313731 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psnhj\" (UniqueName: \"kubernetes.io/projected/5ca4c937-3ee5-4bca-b640-c19715cc2900-kube-api-access-psnhj\") pod \"nova-metadata-0\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " pod="openstack/nova-metadata-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.321340 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-config-data\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.321903 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xndk\" (UniqueName: \"kubernetes.io/projected/38add41c-ac98-4032-afdc-492adcadac0a-kube-api-access-4xndk\") pod \"nova-scheduler-0\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.350345 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395226 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtw7r\" (UniqueName: \"kubernetes.io/projected/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-kube-api-access-jtw7r\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdlv\" (UniqueName: \"kubernetes.io/projected/f145abf7-672b-48e2-80e6-52fdae845626-kube-api-access-5jdlv\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395311 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395334 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f8b4-9c58-457a-8742-56fa84945fc6-logs\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395352 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395370 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395405 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395436 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395459 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-dns-svc\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395516 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-config-data\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395533 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fvdtp\" (UniqueName: \"kubernetes.io/projected/6bd7f8b4-9c58-457a-8742-56fa84945fc6-kube-api-access-fvdtp\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.395557 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-config\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.396944 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.397226 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f8b4-9c58-457a-8742-56fa84945fc6-logs\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.398195 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-dns-svc\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.398583 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-config\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " 
pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.398806 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.401790 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.413492 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-config-data\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.420930 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdtp\" (UniqueName: \"kubernetes.io/projected/6bd7f8b4-9c58-457a-8742-56fa84945fc6-kube-api-access-fvdtp\") pod \"nova-api-0\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.420977 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jdlv\" (UniqueName: \"kubernetes.io/projected/f145abf7-672b-48e2-80e6-52fdae845626-kube-api-access-5jdlv\") pod \"dnsmasq-dns-566b5b7845-t5fw8\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.485680 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.496996 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtw7r\" (UniqueName: \"kubernetes.io/projected/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-kube-api-access-jtw7r\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.497063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.497080 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.501017 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.504723 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 
05:43:01.521154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtw7r\" (UniqueName: \"kubernetes.io/projected/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-kube-api-access-jtw7r\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.668518 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.683939 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.711158 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.797039 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-75rfw"] Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.910183 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.920011 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vthl5"] Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.923242 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.925568 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.926525 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.936093 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vthl5"] Jan 31 05:43:01 crc kubenswrapper[5050]: I0131 05:43:01.965060 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.008309 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-config-data\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.008354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-scripts\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.008373 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 
05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.008473 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlvn\" (UniqueName: \"kubernetes.io/projected/cdc6156e-bdae-4cf2-a051-9c884bd592ca-kube-api-access-qmlvn\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.029172 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75rfw" event={"ID":"18c98739-a178-40c1-94b1-a60d20b26f6e","Type":"ContainerStarted","Data":"5cbc06671b74733ef7a42e01de9dd9e35080c56aecac86fe7864f1ae5d931790"} Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.029432 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75rfw" event={"ID":"18c98739-a178-40c1-94b1-a60d20b26f6e","Type":"ContainerStarted","Data":"c5aadeb62d3aef82d94298e74880517101b2bec9a39508ddafd3df4f3b9ec253"} Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.030653 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ca4c937-3ee5-4bca-b640-c19715cc2900","Type":"ContainerStarted","Data":"2eccc188b053c6d53ebb9ffbb6c0c8f30be5a1e455e81605b16c36bd153cbdf1"} Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.032174 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38add41c-ac98-4032-afdc-492adcadac0a","Type":"ContainerStarted","Data":"89a4f7ce787348ca0e79139ff8f8f1ea470da763945c6caa0f31db1e6c20718e"} Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.047527 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-75rfw" podStartSLOduration=2.047510554 podStartE2EDuration="2.047510554s" podCreationTimestamp="2026-01-31 05:43:00 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:02.045686335 +0000 UTC m=+1307.094847921" watchObservedRunningTime="2026-01-31 05:43:02.047510554 +0000 UTC m=+1307.096672150" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.110063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlvn\" (UniqueName: \"kubernetes.io/projected/cdc6156e-bdae-4cf2-a051-9c884bd592ca-kube-api-access-qmlvn\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.110159 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-config-data\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.110186 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-scripts\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.110205 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.115119 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.115356 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-config-data\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.117702 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-scripts\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.135785 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlvn\" (UniqueName: \"kubernetes.io/projected/cdc6156e-bdae-4cf2-a051-9c884bd592ca-kube-api-access-qmlvn\") pod \"nova-cell1-conductor-db-sync-vthl5\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.192346 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-t5fw8"] Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.250559 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.250981 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:02 crc kubenswrapper[5050]: W0131 05:43:02.263135 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a3ee49_050c_40f4_92fe_38dd438ee2ca.slice/crio-0f6e19ea98b8eaa7db453b33596a64e2f81d37e76020176388c9073ebd45f5d7 WatchSource:0}: Error finding container 0f6e19ea98b8eaa7db453b33596a64e2f81d37e76020176388c9073ebd45f5d7: Status 404 returned error can't find the container with id 0f6e19ea98b8eaa7db453b33596a64e2f81d37e76020176388c9073ebd45f5d7 Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.408838 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:02 crc kubenswrapper[5050]: W0131 05:43:02.415196 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bd7f8b4_9c58_457a_8742_56fa84945fc6.slice/crio-77303cc397041bf83e4cfb1118946bf7680f60443942c46dd025dc1eeb5df8c1 WatchSource:0}: Error finding container 77303cc397041bf83e4cfb1118946bf7680f60443942c46dd025dc1eeb5df8c1: Status 404 returned error can't find the container with id 77303cc397041bf83e4cfb1118946bf7680f60443942c46dd025dc1eeb5df8c1 Jan 31 05:43:02 crc kubenswrapper[5050]: I0131 05:43:02.733263 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vthl5"] Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.040660 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6bd7f8b4-9c58-457a-8742-56fa84945fc6","Type":"ContainerStarted","Data":"77303cc397041bf83e4cfb1118946bf7680f60443942c46dd025dc1eeb5df8c1"} Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.042228 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b8a3ee49-050c-40f4-92fe-38dd438ee2ca","Type":"ContainerStarted","Data":"0f6e19ea98b8eaa7db453b33596a64e2f81d37e76020176388c9073ebd45f5d7"} Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.044049 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vthl5" event={"ID":"cdc6156e-bdae-4cf2-a051-9c884bd592ca","Type":"ContainerStarted","Data":"49214b9ef6f69861069ab4a0a5079412baa8c62594fb4894dc80cdd5f68ec5c2"} Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.044091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vthl5" event={"ID":"cdc6156e-bdae-4cf2-a051-9c884bd592ca","Type":"ContainerStarted","Data":"298c475ccb4aba1cafd20e0e14639d78c09718235538ed7d9d31e3c890b0f107"} Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.046053 5050 generic.go:334] "Generic (PLEG): container finished" podID="f145abf7-672b-48e2-80e6-52fdae845626" containerID="a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0" exitCode=0 Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.046113 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" event={"ID":"f145abf7-672b-48e2-80e6-52fdae845626","Type":"ContainerDied","Data":"a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0"} Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.046138 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" event={"ID":"f145abf7-672b-48e2-80e6-52fdae845626","Type":"ContainerStarted","Data":"fadfa896760b65aecab82e943c2ef53fc200bc4b3f788731e60b80afecf6f9ed"} Jan 31 05:43:03 crc kubenswrapper[5050]: I0131 05:43:03.138812 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vthl5" podStartSLOduration=2.138793556 podStartE2EDuration="2.138793556s" podCreationTimestamp="2026-01-31 05:43:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:03.080990494 +0000 UTC m=+1308.130152090" watchObservedRunningTime="2026-01-31 05:43:03.138793556 +0000 UTC m=+1308.187955152" Jan 31 05:43:04 crc kubenswrapper[5050]: I0131 05:43:04.902280 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:04 crc kubenswrapper[5050]: I0131 05:43:04.918454 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.073356 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8a3ee49-050c-40f4-92fe-38dd438ee2ca","Type":"ContainerStarted","Data":"e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089"} Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.073437 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b8a3ee49-050c-40f4-92fe-38dd438ee2ca" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089" gracePeriod=30 Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.077231 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ca4c937-3ee5-4bca-b640-c19715cc2900","Type":"ContainerStarted","Data":"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e"} Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.077272 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ca4c937-3ee5-4bca-b640-c19715cc2900","Type":"ContainerStarted","Data":"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926"} Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.077388 5050 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerName="nova-metadata-log" containerID="cri-o://3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926" gracePeriod=30 Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.077672 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerName="nova-metadata-metadata" containerID="cri-o://9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e" gracePeriod=30 Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.089827 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.205436881 podStartE2EDuration="5.089812339s" podCreationTimestamp="2026-01-31 05:43:01 +0000 UTC" firstStartedPulling="2026-01-31 05:43:02.28219988 +0000 UTC m=+1307.331361476" lastFinishedPulling="2026-01-31 05:43:05.166575338 +0000 UTC m=+1310.215736934" observedRunningTime="2026-01-31 05:43:06.086383475 +0000 UTC m=+1311.135545091" watchObservedRunningTime="2026-01-31 05:43:06.089812339 +0000 UTC m=+1311.138973935" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.094288 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" event={"ID":"f145abf7-672b-48e2-80e6-52fdae845626","Type":"ContainerStarted","Data":"343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20"} Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.094814 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.096859 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38add41c-ac98-4032-afdc-492adcadac0a","Type":"ContainerStarted","Data":"6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836"} Jan 
31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.099519 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6bd7f8b4-9c58-457a-8742-56fa84945fc6","Type":"ContainerStarted","Data":"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8"} Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.099546 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6bd7f8b4-9c58-457a-8742-56fa84945fc6","Type":"ContainerStarted","Data":"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733"} Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.106529 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8916307209999998 podStartE2EDuration="5.106511212s" podCreationTimestamp="2026-01-31 05:43:01 +0000 UTC" firstStartedPulling="2026-01-31 05:43:01.975866865 +0000 UTC m=+1307.025028461" lastFinishedPulling="2026-01-31 05:43:05.190747346 +0000 UTC m=+1310.239908952" observedRunningTime="2026-01-31 05:43:06.101234059 +0000 UTC m=+1311.150395655" watchObservedRunningTime="2026-01-31 05:43:06.106511212 +0000 UTC m=+1311.155672808" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.126519 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.894046387 podStartE2EDuration="5.126503737s" podCreationTimestamp="2026-01-31 05:43:01 +0000 UTC" firstStartedPulling="2026-01-31 05:43:01.931085106 +0000 UTC m=+1306.980246702" lastFinishedPulling="2026-01-31 05:43:05.163542456 +0000 UTC m=+1310.212704052" observedRunningTime="2026-01-31 05:43:06.119611059 +0000 UTC m=+1311.168772655" watchObservedRunningTime="2026-01-31 05:43:06.126503737 +0000 UTC m=+1311.175665333" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.140583 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" podStartSLOduration=5.140568049 podStartE2EDuration="5.140568049s" podCreationTimestamp="2026-01-31 05:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:06.135946263 +0000 UTC m=+1311.185107859" watchObservedRunningTime="2026-01-31 05:43:06.140568049 +0000 UTC m=+1311.189729645" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.159611 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.423668437 podStartE2EDuration="5.159593967s" podCreationTimestamp="2026-01-31 05:43:01 +0000 UTC" firstStartedPulling="2026-01-31 05:43:02.429280281 +0000 UTC m=+1307.478441877" lastFinishedPulling="2026-01-31 05:43:05.165205821 +0000 UTC m=+1310.214367407" observedRunningTime="2026-01-31 05:43:06.158860247 +0000 UTC m=+1311.208021843" watchObservedRunningTime="2026-01-31 05:43:06.159593967 +0000 UTC m=+1311.208755563" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.351719 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.486117 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.487305 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.663137 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.711467 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.754220 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-config-data\") pod \"5ca4c937-3ee5-4bca-b640-c19715cc2900\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.754468 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ca4c937-3ee5-4bca-b640-c19715cc2900-logs\") pod \"5ca4c937-3ee5-4bca-b640-c19715cc2900\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.754571 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-combined-ca-bundle\") pod \"5ca4c937-3ee5-4bca-b640-c19715cc2900\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.754660 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnhj\" (UniqueName: \"kubernetes.io/projected/5ca4c937-3ee5-4bca-b640-c19715cc2900-kube-api-access-psnhj\") pod \"5ca4c937-3ee5-4bca-b640-c19715cc2900\" (UID: \"5ca4c937-3ee5-4bca-b640-c19715cc2900\") " Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.754809 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca4c937-3ee5-4bca-b640-c19715cc2900-logs" (OuterVolumeSpecName: "logs") pod "5ca4c937-3ee5-4bca-b640-c19715cc2900" (UID: "5ca4c937-3ee5-4bca-b640-c19715cc2900"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.755285 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ca4c937-3ee5-4bca-b640-c19715cc2900-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.761130 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca4c937-3ee5-4bca-b640-c19715cc2900-kube-api-access-psnhj" (OuterVolumeSpecName: "kube-api-access-psnhj") pod "5ca4c937-3ee5-4bca-b640-c19715cc2900" (UID: "5ca4c937-3ee5-4bca-b640-c19715cc2900"). InnerVolumeSpecName "kube-api-access-psnhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.785375 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-config-data" (OuterVolumeSpecName: "config-data") pod "5ca4c937-3ee5-4bca-b640-c19715cc2900" (UID: "5ca4c937-3ee5-4bca-b640-c19715cc2900"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.789575 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ca4c937-3ee5-4bca-b640-c19715cc2900" (UID: "5ca4c937-3ee5-4bca-b640-c19715cc2900"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.857973 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.858007 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca4c937-3ee5-4bca-b640-c19715cc2900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:06 crc kubenswrapper[5050]: I0131 05:43:06.858022 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psnhj\" (UniqueName: \"kubernetes.io/projected/5ca4c937-3ee5-4bca-b640-c19715cc2900-kube-api-access-psnhj\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.112013 5050 generic.go:334] "Generic (PLEG): container finished" podID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerID="9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e" exitCode=0 Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.112335 5050 generic.go:334] "Generic (PLEG): container finished" podID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerID="3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926" exitCode=143 Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.112244 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.112156 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ca4c937-3ee5-4bca-b640-c19715cc2900","Type":"ContainerDied","Data":"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e"} Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.112500 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ca4c937-3ee5-4bca-b640-c19715cc2900","Type":"ContainerDied","Data":"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926"} Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.112520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ca4c937-3ee5-4bca-b640-c19715cc2900","Type":"ContainerDied","Data":"2eccc188b053c6d53ebb9ffbb6c0c8f30be5a1e455e81605b16c36bd153cbdf1"} Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.112542 5050 scope.go:117] "RemoveContainer" containerID="9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.141272 5050 scope.go:117] "RemoveContainer" containerID="3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.167191 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.201009 5050 scope.go:117] "RemoveContainer" containerID="9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e" Jan 31 05:43:07 crc kubenswrapper[5050]: E0131 05:43:07.202634 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e\": container with ID starting with 9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e 
not found: ID does not exist" containerID="9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.202669 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e"} err="failed to get container status \"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e\": rpc error: code = NotFound desc = could not find container \"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e\": container with ID starting with 9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e not found: ID does not exist" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.202696 5050 scope.go:117] "RemoveContainer" containerID="3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926" Jan 31 05:43:07 crc kubenswrapper[5050]: E0131 05:43:07.203119 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926\": container with ID starting with 3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926 not found: ID does not exist" containerID="3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.203142 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926"} err="failed to get container status \"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926\": rpc error: code = NotFound desc = could not find container \"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926\": container with ID starting with 3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926 not found: ID does not exist" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 
05:43:07.203159 5050 scope.go:117] "RemoveContainer" containerID="9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.203904 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e"} err="failed to get container status \"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e\": rpc error: code = NotFound desc = could not find container \"9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e\": container with ID starting with 9684ac56835372e98722be3220a02aefb79b2efeaada3e5f8f18f468c53bec2e not found: ID does not exist" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.204089 5050 scope.go:117] "RemoveContainer" containerID="3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.205128 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926"} err="failed to get container status \"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926\": rpc error: code = NotFound desc = could not find container \"3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926\": container with ID starting with 3c139ab1da227d2449b0b6dead1c0075e9b782065fe928c5929426deb6d6a926 not found: ID does not exist" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.209730 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.220933 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:07 crc kubenswrapper[5050]: E0131 05:43:07.221530 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" 
containerName="nova-metadata-metadata" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.221549 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerName="nova-metadata-metadata" Jan 31 05:43:07 crc kubenswrapper[5050]: E0131 05:43:07.221577 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerName="nova-metadata-log" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.221585 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerName="nova-metadata-log" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.221815 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerName="nova-metadata-log" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.221829 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" containerName="nova-metadata-metadata" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.223041 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.227399 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.227640 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.235275 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.369915 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.370160 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6c440d3-994c-4739-a323-19201230b03a-logs\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.370204 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.370277 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-config-data\") pod \"nova-metadata-0\" (UID: 
\"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.370334 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhff\" (UniqueName: \"kubernetes.io/projected/f6c440d3-994c-4739-a323-19201230b03a-kube-api-access-qrhff\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.473054 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.473158 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6c440d3-994c-4739-a323-19201230b03a-logs\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.473225 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.473370 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-config-data\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.473483 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhff\" (UniqueName: \"kubernetes.io/projected/f6c440d3-994c-4739-a323-19201230b03a-kube-api-access-qrhff\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.473544 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6c440d3-994c-4739-a323-19201230b03a-logs\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.479642 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.480556 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-config-data\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.480985 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.508099 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhff\" (UniqueName: \"kubernetes.io/projected/f6c440d3-994c-4739-a323-19201230b03a-kube-api-access-qrhff\") pod 
\"nova-metadata-0\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.546318 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:07 crc kubenswrapper[5050]: I0131 05:43:07.752748 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca4c937-3ee5-4bca-b640-c19715cc2900" path="/var/lib/kubelet/pods/5ca4c937-3ee5-4bca-b640-c19715cc2900/volumes" Jan 31 05:43:08 crc kubenswrapper[5050]: I0131 05:43:08.033087 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:08 crc kubenswrapper[5050]: I0131 05:43:08.122976 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6c440d3-994c-4739-a323-19201230b03a","Type":"ContainerStarted","Data":"7003ae765f892a5f151884d24e06a2bff6a3d506183331658a503aee198f8511"} Jan 31 05:43:09 crc kubenswrapper[5050]: I0131 05:43:09.137806 5050 generic.go:334] "Generic (PLEG): container finished" podID="18c98739-a178-40c1-94b1-a60d20b26f6e" containerID="5cbc06671b74733ef7a42e01de9dd9e35080c56aecac86fe7864f1ae5d931790" exitCode=0 Jan 31 05:43:09 crc kubenswrapper[5050]: I0131 05:43:09.137943 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75rfw" event={"ID":"18c98739-a178-40c1-94b1-a60d20b26f6e","Type":"ContainerDied","Data":"5cbc06671b74733ef7a42e01de9dd9e35080c56aecac86fe7864f1ae5d931790"} Jan 31 05:43:09 crc kubenswrapper[5050]: I0131 05:43:09.141405 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6c440d3-994c-4739-a323-19201230b03a","Type":"ContainerStarted","Data":"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f"} Jan 31 05:43:09 crc kubenswrapper[5050]: I0131 05:43:09.141473 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f6c440d3-994c-4739-a323-19201230b03a","Type":"ContainerStarted","Data":"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7"} Jan 31 05:43:09 crc kubenswrapper[5050]: I0131 05:43:09.179246 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.179217245 podStartE2EDuration="2.179217245s" podCreationTimestamp="2026-01-31 05:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:09.172517583 +0000 UTC m=+1314.221679199" watchObservedRunningTime="2026-01-31 05:43:09.179217245 +0000 UTC m=+1314.228378831" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.621915 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75rfw" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.744039 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-combined-ca-bundle\") pod \"18c98739-a178-40c1-94b1-a60d20b26f6e\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.744484 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-scripts\") pod \"18c98739-a178-40c1-94b1-a60d20b26f6e\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.744690 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kk2l\" (UniqueName: \"kubernetes.io/projected/18c98739-a178-40c1-94b1-a60d20b26f6e-kube-api-access-7kk2l\") pod \"18c98739-a178-40c1-94b1-a60d20b26f6e\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " Jan 31 05:43:10 crc 
kubenswrapper[5050]: I0131 05:43:10.745641 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-config-data\") pod \"18c98739-a178-40c1-94b1-a60d20b26f6e\" (UID: \"18c98739-a178-40c1-94b1-a60d20b26f6e\") " Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.751579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-scripts" (OuterVolumeSpecName: "scripts") pod "18c98739-a178-40c1-94b1-a60d20b26f6e" (UID: "18c98739-a178-40c1-94b1-a60d20b26f6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.752566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c98739-a178-40c1-94b1-a60d20b26f6e-kube-api-access-7kk2l" (OuterVolumeSpecName: "kube-api-access-7kk2l") pod "18c98739-a178-40c1-94b1-a60d20b26f6e" (UID: "18c98739-a178-40c1-94b1-a60d20b26f6e"). InnerVolumeSpecName "kube-api-access-7kk2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.775022 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18c98739-a178-40c1-94b1-a60d20b26f6e" (UID: "18c98739-a178-40c1-94b1-a60d20b26f6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.785995 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-config-data" (OuterVolumeSpecName: "config-data") pod "18c98739-a178-40c1-94b1-a60d20b26f6e" (UID: "18c98739-a178-40c1-94b1-a60d20b26f6e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.848211 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.848258 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.848278 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kk2l\" (UniqueName: \"kubernetes.io/projected/18c98739-a178-40c1-94b1-a60d20b26f6e-kube-api-access-7kk2l\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:10 crc kubenswrapper[5050]: I0131 05:43:10.848300 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c98739-a178-40c1-94b1-a60d20b26f6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.182235 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75rfw" event={"ID":"18c98739-a178-40c1-94b1-a60d20b26f6e","Type":"ContainerDied","Data":"c5aadeb62d3aef82d94298e74880517101b2bec9a39508ddafd3df4f3b9ec253"} Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.182339 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5aadeb62d3aef82d94298e74880517101b2bec9a39508ddafd3df4f3b9ec253" Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.182720 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75rfw" Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.207281 5050 generic.go:334] "Generic (PLEG): container finished" podID="cdc6156e-bdae-4cf2-a051-9c884bd592ca" containerID="49214b9ef6f69861069ab4a0a5079412baa8c62594fb4894dc80cdd5f68ec5c2" exitCode=0 Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.207336 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vthl5" event={"ID":"cdc6156e-bdae-4cf2-a051-9c884bd592ca","Type":"ContainerDied","Data":"49214b9ef6f69861069ab4a0a5079412baa8c62594fb4894dc80cdd5f68ec5c2"} Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.351249 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.352523 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.352786 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-log" containerID="cri-o://ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733" gracePeriod=30 Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.352891 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-api" containerID="cri-o://9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8" gracePeriod=30 Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.379425 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.390841 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 
05:43:11.391080 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-log" containerID="cri-o://8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7" gracePeriod=30 Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.391252 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-metadata" containerID="cri-o://51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f" gracePeriod=30 Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.402661 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.670061 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.728915 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-mqhmf"] Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.729888 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerName="dnsmasq-dns" containerID="cri-o://f59c109c088d566c5d6d1cda5458240500551948426de5c20e5d4bdd962cfcf3" gracePeriod=10 Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.855257 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Jan 31 05:43:11 crc kubenswrapper[5050]: I0131 05:43:11.932489 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.004021 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072426 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-combined-ca-bundle\") pod \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072489 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-config-data\") pod \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072530 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6c440d3-994c-4739-a323-19201230b03a-logs\") pod \"f6c440d3-994c-4739-a323-19201230b03a\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072596 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-combined-ca-bundle\") pod \"f6c440d3-994c-4739-a323-19201230b03a\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072614 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f8b4-9c58-457a-8742-56fa84945fc6-logs\") pod \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " Jan 31 05:43:12 crc 
kubenswrapper[5050]: I0131 05:43:12.072636 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-nova-metadata-tls-certs\") pod \"f6c440d3-994c-4739-a323-19201230b03a\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072683 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-config-data\") pod \"f6c440d3-994c-4739-a323-19201230b03a\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072747 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhff\" (UniqueName: \"kubernetes.io/projected/f6c440d3-994c-4739-a323-19201230b03a-kube-api-access-qrhff\") pod \"f6c440d3-994c-4739-a323-19201230b03a\" (UID: \"f6c440d3-994c-4739-a323-19201230b03a\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.072780 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvdtp\" (UniqueName: \"kubernetes.io/projected/6bd7f8b4-9c58-457a-8742-56fa84945fc6-kube-api-access-fvdtp\") pod \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\" (UID: \"6bd7f8b4-9c58-457a-8742-56fa84945fc6\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.073353 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c440d3-994c-4739-a323-19201230b03a-logs" (OuterVolumeSpecName: "logs") pod "f6c440d3-994c-4739-a323-19201230b03a" (UID: "f6c440d3-994c-4739-a323-19201230b03a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.073390 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd7f8b4-9c58-457a-8742-56fa84945fc6-logs" (OuterVolumeSpecName: "logs") pod "6bd7f8b4-9c58-457a-8742-56fa84945fc6" (UID: "6bd7f8b4-9c58-457a-8742-56fa84945fc6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.090441 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c440d3-994c-4739-a323-19201230b03a-kube-api-access-qrhff" (OuterVolumeSpecName: "kube-api-access-qrhff") pod "f6c440d3-994c-4739-a323-19201230b03a" (UID: "f6c440d3-994c-4739-a323-19201230b03a"). InnerVolumeSpecName "kube-api-access-qrhff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.112163 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd7f8b4-9c58-457a-8742-56fa84945fc6-kube-api-access-fvdtp" (OuterVolumeSpecName: "kube-api-access-fvdtp") pod "6bd7f8b4-9c58-457a-8742-56fa84945fc6" (UID: "6bd7f8b4-9c58-457a-8742-56fa84945fc6"). InnerVolumeSpecName "kube-api-access-fvdtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.125189 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-config-data" (OuterVolumeSpecName: "config-data") pod "f6c440d3-994c-4739-a323-19201230b03a" (UID: "f6c440d3-994c-4739-a323-19201230b03a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.143849 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6c440d3-994c-4739-a323-19201230b03a" (UID: "f6c440d3-994c-4739-a323-19201230b03a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.144767 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bd7f8b4-9c58-457a-8742-56fa84945fc6" (UID: "6bd7f8b4-9c58-457a-8742-56fa84945fc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.144853 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-config-data" (OuterVolumeSpecName: "config-data") pod "6bd7f8b4-9c58-457a-8742-56fa84945fc6" (UID: "6bd7f8b4-9c58-457a-8742-56fa84945fc6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175179 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvdtp\" (UniqueName: \"kubernetes.io/projected/6bd7f8b4-9c58-457a-8742-56fa84945fc6-kube-api-access-fvdtp\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175209 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175219 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f8b4-9c58-457a-8742-56fa84945fc6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175228 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6c440d3-994c-4739-a323-19201230b03a-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175236 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175243 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f8b4-9c58-457a-8742-56fa84945fc6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175251 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.175259 5050 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-qrhff\" (UniqueName: \"kubernetes.io/projected/f6c440d3-994c-4739-a323-19201230b03a-kube-api-access-qrhff\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.192060 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f6c440d3-994c-4739-a323-19201230b03a" (UID: "f6c440d3-994c-4739-a323-19201230b03a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.220119 5050 generic.go:334] "Generic (PLEG): container finished" podID="f6c440d3-994c-4739-a323-19201230b03a" containerID="51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f" exitCode=0 Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.220368 5050 generic.go:334] "Generic (PLEG): container finished" podID="f6c440d3-994c-4739-a323-19201230b03a" containerID="8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7" exitCode=143 Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.220541 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6c440d3-994c-4739-a323-19201230b03a","Type":"ContainerDied","Data":"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f"} Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.220634 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6c440d3-994c-4739-a323-19201230b03a","Type":"ContainerDied","Data":"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7"} Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.220709 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f6c440d3-994c-4739-a323-19201230b03a","Type":"ContainerDied","Data":"7003ae765f892a5f151884d24e06a2bff6a3d506183331658a503aee198f8511"} Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.220796 5050 scope.go:117] "RemoveContainer" containerID="51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.221001 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.227211 5050 generic.go:334] "Generic (PLEG): container finished" podID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerID="f59c109c088d566c5d6d1cda5458240500551948426de5c20e5d4bdd962cfcf3" exitCode=0 Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.227271 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" event={"ID":"ac560e57-d991-4e2f-826b-136d7c6dc075","Type":"ContainerDied","Data":"f59c109c088d566c5d6d1cda5458240500551948426de5c20e5d4bdd962cfcf3"} Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.231286 5050 generic.go:334] "Generic (PLEG): container finished" podID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerID="9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8" exitCode=0 Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.231313 5050 generic.go:334] "Generic (PLEG): container finished" podID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerID="ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733" exitCode=143 Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.231451 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6bd7f8b4-9c58-457a-8742-56fa84945fc6","Type":"ContainerDied","Data":"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8"} Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.231482 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"6bd7f8b4-9c58-457a-8742-56fa84945fc6","Type":"ContainerDied","Data":"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733"} Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.231493 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6bd7f8b4-9c58-457a-8742-56fa84945fc6","Type":"ContainerDied","Data":"77303cc397041bf83e4cfb1118946bf7680f60443942c46dd025dc1eeb5df8c1"} Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.231458 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.231728 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="38add41c-ac98-4032-afdc-492adcadac0a" containerName="nova-scheduler-scheduler" containerID="cri-o://6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" gracePeriod=30 Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.242212 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.244263 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.253210 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" containerID="6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.253282 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="38add41c-ac98-4032-afdc-492adcadac0a" containerName="nova-scheduler-scheduler" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.275042 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.277327 5050 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6c440d3-994c-4739-a323-19201230b03a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.292877 5050 scope.go:117] "RemoveContainer" containerID="8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.309320 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.322667 5050 scope.go:117] "RemoveContainer" containerID="51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.323139 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f\": container with ID starting with 51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f not found: ID does not exist" containerID="51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f" Jan 31 05:43:12 crc 
kubenswrapper[5050]: I0131 05:43:12.323164 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f"} err="failed to get container status \"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f\": rpc error: code = NotFound desc = could not find container \"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f\": container with ID starting with 51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.323183 5050 scope.go:117] "RemoveContainer" containerID="8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.328131 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7\": container with ID starting with 8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7 not found: ID does not exist" containerID="8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.328171 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7"} err="failed to get container status \"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7\": rpc error: code = NotFound desc = could not find container \"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7\": container with ID starting with 8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7 not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.328194 5050 scope.go:117] "RemoveContainer" containerID="51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f" Jan 31 
05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.333356 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f"} err="failed to get container status \"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f\": rpc error: code = NotFound desc = could not find container \"51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f\": container with ID starting with 51fdcfb2a91f19d04de4b021c585eee1c535efa25119dbda986399118cde027f not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.333394 5050 scope.go:117] "RemoveContainer" containerID="8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.333435 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.333991 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7"} err="failed to get container status \"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7\": rpc error: code = NotFound desc = could not find container \"8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7\": container with ID starting with 8cf17f7ab5e6b3a2707cf56226c88e485168e9b3a75610d7a977997e8fe387d7 not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.334017 5050 scope.go:117] "RemoveContainer" containerID="9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.334451 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.341727 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.348718 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.349131 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-log" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349144 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-log" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.349164 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-api" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349170 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-api" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.349185 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-log" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349190 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-log" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.349200 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerName="init" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349205 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerName="init" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.349221 5050 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c98739-a178-40c1-94b1-a60d20b26f6e" containerName="nova-manage" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349226 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c98739-a178-40c1-94b1-a60d20b26f6e" containerName="nova-manage" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.349233 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerName="dnsmasq-dns" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349238 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerName="dnsmasq-dns" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.349250 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-metadata" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349257 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-metadata" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349449 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-metadata" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349493 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c98739-a178-40c1-94b1-a60d20b26f6e" containerName="nova-manage" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349507 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c440d3-994c-4739-a323-19201230b03a" containerName="nova-metadata-log" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349516 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-log" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349524 
5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" containerName="dnsmasq-dns" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.349531 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" containerName="nova-api-api" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.350420 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.354433 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.361022 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.364881 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.366591 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.367997 5050 scope.go:117] "RemoveContainer" containerID="ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.370071 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.378215 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.397408 5050 scope.go:117] "RemoveContainer" containerID="9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8" Jan 31 05:43:12 crc kubenswrapper[5050]: E0131 05:43:12.397706 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8\": container with ID starting with 9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8 not found: ID does not exist" containerID="9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.397733 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8"} err="failed to get container status \"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8\": rpc error: code = NotFound desc = could not find container \"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8\": container with ID starting with 9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8 not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.397752 5050 scope.go:117] "RemoveContainer" containerID="ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733" Jan 31 05:43:12 crc 
kubenswrapper[5050]: E0131 05:43:12.397913 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733\": container with ID starting with ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733 not found: ID does not exist" containerID="ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.397942 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733"} err="failed to get container status \"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733\": rpc error: code = NotFound desc = could not find container \"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733\": container with ID starting with ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733 not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.397990 5050 scope.go:117] "RemoveContainer" containerID="9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.399879 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8"} err="failed to get container status \"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8\": rpc error: code = NotFound desc = could not find container \"9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8\": container with ID starting with 9e1321d4c582cc207cd1fcc98ff0068f323cba887778ec90dd73b8986eadedc8 not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.399925 5050 scope.go:117] "RemoveContainer" containerID="ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733" Jan 31 
05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.400516 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733"} err="failed to get container status \"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733\": rpc error: code = NotFound desc = could not find container \"ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733\": container with ID starting with ef3264df5624c2ffd4590989aaa08160c642dde05e05ee7084d180f92e083733 not found: ID does not exist" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.408224 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.479811 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwjcs\" (UniqueName: \"kubernetes.io/projected/ac560e57-d991-4e2f-826b-136d7c6dc075-kube-api-access-fwjcs\") pod \"ac560e57-d991-4e2f-826b-136d7c6dc075\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.479893 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-sb\") pod \"ac560e57-d991-4e2f-826b-136d7c6dc075\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480070 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-dns-svc\") pod \"ac560e57-d991-4e2f-826b-136d7c6dc075\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480109 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-nb\") pod \"ac560e57-d991-4e2f-826b-136d7c6dc075\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480325 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-config\") pod \"ac560e57-d991-4e2f-826b-136d7c6dc075\" (UID: \"ac560e57-d991-4e2f-826b-136d7c6dc075\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480628 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-logs\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480761 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-config-data\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480816 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 
05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.480878 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nph\" (UniqueName: \"kubernetes.io/projected/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-kube-api-access-86nph\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.481015 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-config-data\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.481074 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7m2\" (UniqueName: \"kubernetes.io/projected/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-kube-api-access-fv7m2\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.481137 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-logs\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.481194 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.483847 5050 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac560e57-d991-4e2f-826b-136d7c6dc075-kube-api-access-fwjcs" (OuterVolumeSpecName: "kube-api-access-fwjcs") pod "ac560e57-d991-4e2f-826b-136d7c6dc075" (UID: "ac560e57-d991-4e2f-826b-136d7c6dc075"). InnerVolumeSpecName "kube-api-access-fwjcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.535826 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac560e57-d991-4e2f-826b-136d7c6dc075" (UID: "ac560e57-d991-4e2f-826b-136d7c6dc075"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.538075 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac560e57-d991-4e2f-826b-136d7c6dc075" (UID: "ac560e57-d991-4e2f-826b-136d7c6dc075"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.541362 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-config" (OuterVolumeSpecName: "config") pod "ac560e57-d991-4e2f-826b-136d7c6dc075" (UID: "ac560e57-d991-4e2f-826b-136d7c6dc075"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.558347 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac560e57-d991-4e2f-826b-136d7c6dc075" (UID: "ac560e57-d991-4e2f-826b-136d7c6dc075"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583164 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583208 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-logs\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583242 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-config-data\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583263 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583306 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-86nph\" (UniqueName: \"kubernetes.io/projected/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-kube-api-access-86nph\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583353 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-config-data\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583377 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7m2\" (UniqueName: \"kubernetes.io/projected/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-kube-api-access-fv7m2\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583410 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-logs\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583442 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583497 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc 
kubenswrapper[5050]: I0131 05:43:12.583508 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwjcs\" (UniqueName: \"kubernetes.io/projected/ac560e57-d991-4e2f-826b-136d7c6dc075-kube-api-access-fwjcs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583517 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583526 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.583533 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac560e57-d991-4e2f-826b-136d7c6dc075-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.584234 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-logs\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.584729 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-logs\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.586768 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.590350 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-config-data\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.590562 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-config-data\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.591023 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.597681 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nph\" (UniqueName: \"kubernetes.io/projected/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-kube-api-access-86nph\") pod \"nova-metadata-0\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.600143 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7m2\" (UniqueName: \"kubernetes.io/projected/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-kube-api-access-fv7m2\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.603846 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.687424 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.696148 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.762797 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.887471 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-combined-ca-bundle\") pod \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.887611 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-scripts\") pod \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.887637 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmlvn\" (UniqueName: \"kubernetes.io/projected/cdc6156e-bdae-4cf2-a051-9c884bd592ca-kube-api-access-qmlvn\") pod \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.887705 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-config-data\") pod \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\" (UID: \"cdc6156e-bdae-4cf2-a051-9c884bd592ca\") " Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.903176 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-scripts" (OuterVolumeSpecName: "scripts") pod "cdc6156e-bdae-4cf2-a051-9c884bd592ca" (UID: "cdc6156e-bdae-4cf2-a051-9c884bd592ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.904157 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdc6156e-bdae-4cf2-a051-9c884bd592ca-kube-api-access-qmlvn" (OuterVolumeSpecName: "kube-api-access-qmlvn") pod "cdc6156e-bdae-4cf2-a051-9c884bd592ca" (UID: "cdc6156e-bdae-4cf2-a051-9c884bd592ca"). InnerVolumeSpecName "kube-api-access-qmlvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.976304 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdc6156e-bdae-4cf2-a051-9c884bd592ca" (UID: "cdc6156e-bdae-4cf2-a051-9c884bd592ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.980196 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-config-data" (OuterVolumeSpecName: "config-data") pod "cdc6156e-bdae-4cf2-a051-9c884bd592ca" (UID: "cdc6156e-bdae-4cf2-a051-9c884bd592ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.990739 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.990771 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.990783 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc6156e-bdae-4cf2-a051-9c884bd592ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:12 crc kubenswrapper[5050]: I0131 05:43:12.990792 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmlvn\" (UniqueName: \"kubernetes.io/projected/cdc6156e-bdae-4cf2-a051-9c884bd592ca-kube-api-access-qmlvn\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.242112 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vthl5" event={"ID":"cdc6156e-bdae-4cf2-a051-9c884bd592ca","Type":"ContainerDied","Data":"298c475ccb4aba1cafd20e0e14639d78c09718235538ed7d9d31e3c890b0f107"} Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.242372 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="298c475ccb4aba1cafd20e0e14639d78c09718235538ed7d9d31e3c890b0f107" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.242369 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vthl5" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.244354 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" event={"ID":"ac560e57-d991-4e2f-826b-136d7c6dc075","Type":"ContainerDied","Data":"a99bed8e8460722ddf4d1b6ed9f4112e149bb9ab97f021f2b716951202d7bbc4"} Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.244395 5050 scope.go:117] "RemoveContainer" containerID="f59c109c088d566c5d6d1cda5458240500551948426de5c20e5d4bdd962cfcf3" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.244456 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-mqhmf" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.266464 5050 scope.go:117] "RemoveContainer" containerID="d15223da2b54567712405cac1546eedc57bec271bfe988a5626f6a0ab8f17f78" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.297122 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.313192 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 05:43:13 crc kubenswrapper[5050]: E0131 05:43:13.313788 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdc6156e-bdae-4cf2-a051-9c884bd592ca" containerName="nova-cell1-conductor-db-sync" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.313868 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdc6156e-bdae-4cf2-a051-9c884bd592ca" containerName="nova-cell1-conductor-db-sync" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.314101 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdc6156e-bdae-4cf2-a051-9c884bd592ca" containerName="nova-cell1-conductor-db-sync" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.314856 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.325769 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.327566 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.336074 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-mqhmf"] Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.343845 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-mqhmf"] Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.379572 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:13 crc kubenswrapper[5050]: W0131 05:43:13.383023 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a0ddaee_6080_4cef_b5d5_a470496ba5d4.slice/crio-8cbb6d75e88e828683bda5f7767e5291cc3a98913aa6b7e1b4614367c8919056 WatchSource:0}: Error finding container 8cbb6d75e88e828683bda5f7767e5291cc3a98913aa6b7e1b4614367c8919056: Status 404 returned error can't find the container with id 8cbb6d75e88e828683bda5f7767e5291cc3a98913aa6b7e1b4614367c8919056 Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.404288 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnrk\" (UniqueName: \"kubernetes.io/projected/d059773b-c9a5-47db-aade-0f635664fe08-kube-api-access-6bnrk\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.404331 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d059773b-c9a5-47db-aade-0f635664fe08-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.404550 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d059773b-c9a5-47db-aade-0f635664fe08-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.506378 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d059773b-c9a5-47db-aade-0f635664fe08-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.506522 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bnrk\" (UniqueName: \"kubernetes.io/projected/d059773b-c9a5-47db-aade-0f635664fe08-kube-api-access-6bnrk\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.506547 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d059773b-c9a5-47db-aade-0f635664fe08-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.510141 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d059773b-c9a5-47db-aade-0f635664fe08-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.518647 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d059773b-c9a5-47db-aade-0f635664fe08-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.526637 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bnrk\" (UniqueName: \"kubernetes.io/projected/d059773b-c9a5-47db-aade-0f635664fe08-kube-api-access-6bnrk\") pod \"nova-cell1-conductor-0\" (UID: \"d059773b-c9a5-47db-aade-0f635664fe08\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.634077 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.759263 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd7f8b4-9c58-457a-8742-56fa84945fc6" path="/var/lib/kubelet/pods/6bd7f8b4-9c58-457a-8742-56fa84945fc6/volumes" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.760427 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac560e57-d991-4e2f-826b-136d7c6dc075" path="/var/lib/kubelet/pods/ac560e57-d991-4e2f-826b-136d7c6dc075/volumes" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.761206 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c440d3-994c-4739-a323-19201230b03a" path="/var/lib/kubelet/pods/f6c440d3-994c-4739-a323-19201230b03a/volumes" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.827411 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.914248 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-combined-ca-bundle\") pod \"38add41c-ac98-4032-afdc-492adcadac0a\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.914440 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-config-data\") pod \"38add41c-ac98-4032-afdc-492adcadac0a\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.914524 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xndk\" (UniqueName: \"kubernetes.io/projected/38add41c-ac98-4032-afdc-492adcadac0a-kube-api-access-4xndk\") pod \"38add41c-ac98-4032-afdc-492adcadac0a\" (UID: \"38add41c-ac98-4032-afdc-492adcadac0a\") " Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.921076 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38add41c-ac98-4032-afdc-492adcadac0a-kube-api-access-4xndk" (OuterVolumeSpecName: "kube-api-access-4xndk") pod "38add41c-ac98-4032-afdc-492adcadac0a" (UID: "38add41c-ac98-4032-afdc-492adcadac0a"). InnerVolumeSpecName "kube-api-access-4xndk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.938074 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-config-data" (OuterVolumeSpecName: "config-data") pod "38add41c-ac98-4032-afdc-492adcadac0a" (UID: "38add41c-ac98-4032-afdc-492adcadac0a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:13 crc kubenswrapper[5050]: I0131 05:43:13.986358 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38add41c-ac98-4032-afdc-492adcadac0a" (UID: "38add41c-ac98-4032-afdc-492adcadac0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.030928 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xndk\" (UniqueName: \"kubernetes.io/projected/38add41c-ac98-4032-afdc-492adcadac0a-kube-api-access-4xndk\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.031062 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.031083 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38add41c-ac98-4032-afdc-492adcadac0a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:14 crc kubenswrapper[5050]: W0131 05:43:14.091733 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd059773b_c9a5_47db_aade_0f635664fe08.slice/crio-595e947e166f86e5917d4da65f4c9d8d9f4170ef58a75a21688aa8f1d845a90e WatchSource:0}: Error finding container 595e947e166f86e5917d4da65f4c9d8d9f4170ef58a75a21688aa8f1d845a90e: Status 404 returned error can't find the container with id 595e947e166f86e5917d4da65f4c9d8d9f4170ef58a75a21688aa8f1d845a90e Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.094768 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.265045 5050 generic.go:334] "Generic (PLEG): container finished" podID="38add41c-ac98-4032-afdc-492adcadac0a" containerID="6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" exitCode=0 Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.265096 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38add41c-ac98-4032-afdc-492adcadac0a","Type":"ContainerDied","Data":"6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.265122 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38add41c-ac98-4032-afdc-492adcadac0a","Type":"ContainerDied","Data":"89a4f7ce787348ca0e79139ff8f8f1ea470da763945c6caa0f31db1e6c20718e"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.265138 5050 scope.go:117] "RemoveContainer" containerID="6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.265212 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.275167 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d059773b-c9a5-47db-aade-0f635664fe08","Type":"ContainerStarted","Data":"6e4193fa28d4f4b377635551eb49c843f9502b4022fd03efb5e67f3af20eb300"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.275223 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d059773b-c9a5-47db-aade-0f635664fe08","Type":"ContainerStarted","Data":"595e947e166f86e5917d4da65f4c9d8d9f4170ef58a75a21688aa8f1d845a90e"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.275248 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.286620 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9cb357-6854-412f-8fe6-d7c4404ecbc9","Type":"ContainerStarted","Data":"1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.286663 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9cb357-6854-412f-8fe6-d7c4404ecbc9","Type":"ContainerStarted","Data":"c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.286674 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9cb357-6854-412f-8fe6-d7c4404ecbc9","Type":"ContainerStarted","Data":"0098a03baa2fb01c110b51833feea00cf681754187e8967198b0d0b345552748"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.297673 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0a0ddaee-6080-4cef-b5d5-a470496ba5d4","Type":"ContainerStarted","Data":"189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.297725 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a0ddaee-6080-4cef-b5d5-a470496ba5d4","Type":"ContainerStarted","Data":"faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.297736 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a0ddaee-6080-4cef-b5d5-a470496ba5d4","Type":"ContainerStarted","Data":"8cbb6d75e88e828683bda5f7767e5291cc3a98913aa6b7e1b4614367c8919056"} Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.300663 5050 scope.go:117] "RemoveContainer" containerID="6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" Jan 31 05:43:14 crc kubenswrapper[5050]: E0131 05:43:14.301094 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836\": container with ID starting with 6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836 not found: ID does not exist" containerID="6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.301126 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836"} err="failed to get container status \"6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836\": rpc error: code = NotFound desc = could not find container \"6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836\": container with ID starting with 6ad41289e82550729ded2acc9e8e45cb7af209bbe6ba58822fadce0107c05836 not found: ID does not exist" Jan 31 05:43:14 
crc kubenswrapper[5050]: I0131 05:43:14.301767 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.301745971 podStartE2EDuration="1.301745971s" podCreationTimestamp="2026-01-31 05:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:14.294218895 +0000 UTC m=+1319.343380491" watchObservedRunningTime="2026-01-31 05:43:14.301745971 +0000 UTC m=+1319.350907567" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.331977 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.331943912 podStartE2EDuration="2.331943912s" podCreationTimestamp="2026-01-31 05:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:14.320785958 +0000 UTC m=+1319.369947584" watchObservedRunningTime="2026-01-31 05:43:14.331943912 +0000 UTC m=+1319.381105508" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.342321 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.351562 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.360400 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:14 crc kubenswrapper[5050]: E0131 05:43:14.360873 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38add41c-ac98-4032-afdc-492adcadac0a" containerName="nova-scheduler-scheduler" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.360892 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="38add41c-ac98-4032-afdc-492adcadac0a" containerName="nova-scheduler-scheduler" Jan 31 05:43:14 crc 
kubenswrapper[5050]: I0131 05:43:14.361122 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="38add41c-ac98-4032-afdc-492adcadac0a" containerName="nova-scheduler-scheduler" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.361713 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.364731 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.368038 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.368016673 podStartE2EDuration="2.368016673s" podCreationTimestamp="2026-01-31 05:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:14.355602696 +0000 UTC m=+1319.404764292" watchObservedRunningTime="2026-01-31 05:43:14.368016673 +0000 UTC m=+1319.417178269" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.386061 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.541512 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.541642 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-config-data\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " 
pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.541974 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-427v6\" (UniqueName: \"kubernetes.io/projected/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-kube-api-access-427v6\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.644047 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.644126 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-config-data\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.644224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-427v6\" (UniqueName: \"kubernetes.io/projected/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-kube-api-access-427v6\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.651108 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-config-data\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.658089 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.660745 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-427v6\" (UniqueName: \"kubernetes.io/projected/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-kube-api-access-427v6\") pod \"nova-scheduler-0\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.686480 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:14 crc kubenswrapper[5050]: I0131 05:43:14.967339 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:14 crc kubenswrapper[5050]: W0131 05:43:14.969028 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbdeb55b_9447_49ac_ac5e_6264ac54cb35.slice/crio-60759327b396bdf0b2271f0d06c81e9c89087f0756c670a9d3e105bf5824d1fb WatchSource:0}: Error finding container 60759327b396bdf0b2271f0d06c81e9c89087f0756c670a9d3e105bf5824d1fb: Status 404 returned error can't find the container with id 60759327b396bdf0b2271f0d06c81e9c89087f0756c670a9d3e105bf5824d1fb Jan 31 05:43:15 crc kubenswrapper[5050]: I0131 05:43:15.316589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bbdeb55b-9447-49ac-ac5e-6264ac54cb35","Type":"ContainerStarted","Data":"96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b"} Jan 31 05:43:15 crc kubenswrapper[5050]: I0131 05:43:15.316630 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"bbdeb55b-9447-49ac-ac5e-6264ac54cb35","Type":"ContainerStarted","Data":"60759327b396bdf0b2271f0d06c81e9c89087f0756c670a9d3e105bf5824d1fb"} Jan 31 05:43:15 crc kubenswrapper[5050]: I0131 05:43:15.349623 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.34960094 podStartE2EDuration="1.34960094s" podCreationTimestamp="2026-01-31 05:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:15.345781816 +0000 UTC m=+1320.394943432" watchObservedRunningTime="2026-01-31 05:43:15.34960094 +0000 UTC m=+1320.398762546" Jan 31 05:43:15 crc kubenswrapper[5050]: I0131 05:43:15.763127 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38add41c-ac98-4032-afdc-492adcadac0a" path="/var/lib/kubelet/pods/38add41c-ac98-4032-afdc-492adcadac0a/volumes" Jan 31 05:43:17 crc kubenswrapper[5050]: I0131 05:43:17.688667 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:43:17 crc kubenswrapper[5050]: I0131 05:43:17.688820 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:43:19 crc kubenswrapper[5050]: I0131 05:43:19.687516 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 05:43:20 crc kubenswrapper[5050]: I0131 05:43:20.268539 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 05:43:22 crc kubenswrapper[5050]: I0131 05:43:22.688466 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:43:22 crc kubenswrapper[5050]: I0131 05:43:22.688736 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:43:22 crc kubenswrapper[5050]: 
I0131 05:43:22.696711 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:43:22 crc kubenswrapper[5050]: I0131 05:43:22.696748 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:43:22 crc kubenswrapper[5050]: I0131 05:43:22.999387 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:43:22 crc kubenswrapper[5050]: I0131 05:43:22.999632 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0afb5f3d-b148-46fd-9867-071aafa5adff" containerName="kube-state-metrics" containerID="cri-o://196feacae87f155f9194935ad93031f3dc66d064da77e65a5f9c4293ace3b7af" gracePeriod=30 Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.421573 5050 generic.go:334] "Generic (PLEG): container finished" podID="0afb5f3d-b148-46fd-9867-071aafa5adff" containerID="196feacae87f155f9194935ad93031f3dc66d064da77e65a5f9c4293ace3b7af" exitCode=2 Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.421675 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0afb5f3d-b148-46fd-9867-071aafa5adff","Type":"ContainerDied","Data":"196feacae87f155f9194935ad93031f3dc66d064da77e65a5f9c4293ace3b7af"} Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.421889 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0afb5f3d-b148-46fd-9867-071aafa5adff","Type":"ContainerDied","Data":"ffbe3b5ec81ac721edc51f7504fb1f8e30215cd65261227559da489b2db2eff9"} Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.421906 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffbe3b5ec81ac721edc51f7504fb1f8e30215cd65261227559da489b2db2eff9" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.468050 5050 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.520450 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnxsz\" (UniqueName: \"kubernetes.io/projected/0afb5f3d-b148-46fd-9867-071aafa5adff-kube-api-access-wnxsz\") pod \"0afb5f3d-b148-46fd-9867-071aafa5adff\" (UID: \"0afb5f3d-b148-46fd-9867-071aafa5adff\") " Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.529837 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afb5f3d-b148-46fd-9867-071aafa5adff-kube-api-access-wnxsz" (OuterVolumeSpecName: "kube-api-access-wnxsz") pod "0afb5f3d-b148-46fd-9867-071aafa5adff" (UID: "0afb5f3d-b148-46fd-9867-071aafa5adff"). InnerVolumeSpecName "kube-api-access-wnxsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.622018 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnxsz\" (UniqueName: \"kubernetes.io/projected/0afb5f3d-b148-46fd-9867-071aafa5adff-kube-api-access-wnxsz\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.669027 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.793327 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.793337 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.793394 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:43:23 crc kubenswrapper[5050]: I0131 05:43:23.793482 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.098291 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.098537 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-central-agent" containerID="cri-o://f23e34a4261dda895547ff3c4461dcacd4af397476c9e324ab03d89b398ae469" gracePeriod=30 Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.098949 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="proxy-httpd" containerID="cri-o://a73f158801d8f0e89fec5036782ccedc80038638f5eb3c1df68ee1ed09335db2" gracePeriod=30 Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.099013 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="sg-core" 
containerID="cri-o://c375e0d99eed681e5e248d40878ee54b0ccb5f36c4ad918a437e25ae9612bab7" gracePeriod=30 Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.099043 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-notification-agent" containerID="cri-o://e2686047e3546949020b7d93ad7197b6de81984c54bddb2b19cbbb46af6bac1f" gracePeriod=30 Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.433251 5050 generic.go:334] "Generic (PLEG): container finished" podID="36143374-28fb-4560-97c8-11f65509228e" containerID="a73f158801d8f0e89fec5036782ccedc80038638f5eb3c1df68ee1ed09335db2" exitCode=0 Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.433296 5050 generic.go:334] "Generic (PLEG): container finished" podID="36143374-28fb-4560-97c8-11f65509228e" containerID="c375e0d99eed681e5e248d40878ee54b0ccb5f36c4ad918a437e25ae9612bab7" exitCode=2 Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.433302 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerDied","Data":"a73f158801d8f0e89fec5036782ccedc80038638f5eb3c1df68ee1ed09335db2"} Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.433346 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerDied","Data":"c375e0d99eed681e5e248d40878ee54b0ccb5f36c4ad918a437e25ae9612bab7"} Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.433378 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.455386 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.464905 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.474237 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:43:24 crc kubenswrapper[5050]: E0131 05:43:24.474642 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afb5f3d-b148-46fd-9867-071aafa5adff" containerName="kube-state-metrics" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.474660 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afb5f3d-b148-46fd-9867-071aafa5adff" containerName="kube-state-metrics" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.475033 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afb5f3d-b148-46fd-9867-071aafa5adff" containerName="kube-state-metrics" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.475676 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.477929 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.478230 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.490806 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.536416 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.536477 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.536523 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.536561 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9p52\" (UniqueName: 
\"kubernetes.io/projected/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-api-access-t9p52\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.639768 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.639879 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.639984 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.640076 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9p52\" (UniqueName: \"kubernetes.io/projected/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-api-access-t9p52\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.644853 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.647435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.648541 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c25d40-feaf-4c93-b249-64fe546d1e05-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.658669 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9p52\" (UniqueName: \"kubernetes.io/projected/03c25d40-feaf-4c93-b249-64fe546d1e05-kube-api-access-t9p52\") pod \"kube-state-metrics-0\" (UID: \"03c25d40-feaf-4c93-b249-64fe546d1e05\") " pod="openstack/kube-state-metrics-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.687549 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.713791 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 05:43:24 crc kubenswrapper[5050]: I0131 05:43:24.800526 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:43:25 crc kubenswrapper[5050]: I0131 05:43:25.268940 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:43:25 crc kubenswrapper[5050]: I0131 05:43:25.472014 5050 generic.go:334] "Generic (PLEG): container finished" podID="36143374-28fb-4560-97c8-11f65509228e" containerID="f23e34a4261dda895547ff3c4461dcacd4af397476c9e324ab03d89b398ae469" exitCode=0 Jan 31 05:43:25 crc kubenswrapper[5050]: I0131 05:43:25.472150 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerDied","Data":"f23e34a4261dda895547ff3c4461dcacd4af397476c9e324ab03d89b398ae469"} Jan 31 05:43:25 crc kubenswrapper[5050]: I0131 05:43:25.474029 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"03c25d40-feaf-4c93-b249-64fe546d1e05","Type":"ContainerStarted","Data":"bf956cb71a5fbc30c7d79bfe5d324a0a9f22876539a20313aab1459807d8bcff"} Jan 31 05:43:25 crc kubenswrapper[5050]: I0131 05:43:25.520082 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 05:43:25 crc kubenswrapper[5050]: I0131 05:43:25.777165 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afb5f3d-b148-46fd-9867-071aafa5adff" path="/var/lib/kubelet/pods/0afb5f3d-b148-46fd-9867-071aafa5adff/volumes" Jan 31 05:43:26 crc kubenswrapper[5050]: I0131 05:43:26.485121 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"03c25d40-feaf-4c93-b249-64fe546d1e05","Type":"ContainerStarted","Data":"9b636f51f4ebff11603c93b8c0357f0fc231c668ef1c5c5d6b8bf900099dfdf3"} Jan 31 05:43:26 crc kubenswrapper[5050]: I0131 05:43:26.485735 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 05:43:26 crc 
kubenswrapper[5050]: I0131 05:43:26.513123 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.09048921 podStartE2EDuration="2.513102539s" podCreationTimestamp="2026-01-31 05:43:24 +0000 UTC" firstStartedPulling="2026-01-31 05:43:25.278738944 +0000 UTC m=+1330.327900550" lastFinishedPulling="2026-01-31 05:43:25.701352283 +0000 UTC m=+1330.750513879" observedRunningTime="2026-01-31 05:43:26.502477951 +0000 UTC m=+1331.551639557" watchObservedRunningTime="2026-01-31 05:43:26.513102539 +0000 UTC m=+1331.562264125" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.507946 5050 generic.go:334] "Generic (PLEG): container finished" podID="36143374-28fb-4560-97c8-11f65509228e" containerID="e2686047e3546949020b7d93ad7197b6de81984c54bddb2b19cbbb46af6bac1f" exitCode=0 Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.508155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerDied","Data":"e2686047e3546949020b7d93ad7197b6de81984c54bddb2b19cbbb46af6bac1f"} Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.642177 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.749739 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-scripts\") pod \"36143374-28fb-4560-97c8-11f65509228e\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.749858 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-sg-core-conf-yaml\") pod \"36143374-28fb-4560-97c8-11f65509228e\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.749970 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-log-httpd\") pod \"36143374-28fb-4560-97c8-11f65509228e\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.750011 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-combined-ca-bundle\") pod \"36143374-28fb-4560-97c8-11f65509228e\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.750079 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-config-data\") pod \"36143374-28fb-4560-97c8-11f65509228e\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.750128 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-run-httpd\") pod \"36143374-28fb-4560-97c8-11f65509228e\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.750261 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvdg4\" (UniqueName: \"kubernetes.io/projected/36143374-28fb-4560-97c8-11f65509228e-kube-api-access-cvdg4\") pod \"36143374-28fb-4560-97c8-11f65509228e\" (UID: \"36143374-28fb-4560-97c8-11f65509228e\") " Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.750465 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36143374-28fb-4560-97c8-11f65509228e" (UID: "36143374-28fb-4560-97c8-11f65509228e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.750721 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.750900 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36143374-28fb-4560-97c8-11f65509228e" (UID: "36143374-28fb-4560-97c8-11f65509228e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.756749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-scripts" (OuterVolumeSpecName: "scripts") pod "36143374-28fb-4560-97c8-11f65509228e" (UID: "36143374-28fb-4560-97c8-11f65509228e"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.756998 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36143374-28fb-4560-97c8-11f65509228e-kube-api-access-cvdg4" (OuterVolumeSpecName: "kube-api-access-cvdg4") pod "36143374-28fb-4560-97c8-11f65509228e" (UID: "36143374-28fb-4560-97c8-11f65509228e"). InnerVolumeSpecName "kube-api-access-cvdg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.791583 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36143374-28fb-4560-97c8-11f65509228e" (UID: "36143374-28fb-4560-97c8-11f65509228e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.852116 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvdg4\" (UniqueName: \"kubernetes.io/projected/36143374-28fb-4560-97c8-11f65509228e-kube-api-access-cvdg4\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.852148 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.852161 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.852173 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/36143374-28fb-4560-97c8-11f65509228e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.853098 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36143374-28fb-4560-97c8-11f65509228e" (UID: "36143374-28fb-4560-97c8-11f65509228e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.873304 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-config-data" (OuterVolumeSpecName: "config-data") pod "36143374-28fb-4560-97c8-11f65509228e" (UID: "36143374-28fb-4560-97c8-11f65509228e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.954492 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:28 crc kubenswrapper[5050]: I0131 05:43:28.954541 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36143374-28fb-4560-97c8-11f65509228e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.518556 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36143374-28fb-4560-97c8-11f65509228e","Type":"ContainerDied","Data":"9f3b0446c7f7ca0759ee0a2c76f15e55759c2a4652e8d3993e102352e5164ca7"} Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.518610 5050 scope.go:117] "RemoveContainer" containerID="a73f158801d8f0e89fec5036782ccedc80038638f5eb3c1df68ee1ed09335db2" Jan 31 
05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.518634 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.535792 5050 scope.go:117] "RemoveContainer" containerID="c375e0d99eed681e5e248d40878ee54b0ccb5f36c4ad918a437e25ae9612bab7" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.562414 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.570571 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.574749 5050 scope.go:117] "RemoveContainer" containerID="e2686047e3546949020b7d93ad7197b6de81984c54bddb2b19cbbb46af6bac1f" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.603692 5050 scope.go:117] "RemoveContainer" containerID="f23e34a4261dda895547ff3c4461dcacd4af397476c9e324ab03d89b398ae469" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.647705 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:29 crc kubenswrapper[5050]: E0131 05:43:29.648643 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="sg-core" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.648679 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="sg-core" Jan 31 05:43:29 crc kubenswrapper[5050]: E0131 05:43:29.648756 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="proxy-httpd" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.648771 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="proxy-httpd" Jan 31 05:43:29 crc kubenswrapper[5050]: E0131 05:43:29.648799 5050 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-central-agent" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.648810 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-central-agent" Jan 31 05:43:29 crc kubenswrapper[5050]: E0131 05:43:29.648837 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-notification-agent" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.648848 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-notification-agent" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.649235 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="proxy-httpd" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.649262 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-central-agent" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.649285 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="sg-core" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.649302 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="36143374-28fb-4560-97c8-11f65509228e" containerName="ceilometer-notification-agent" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.652081 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.655212 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.655473 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.656390 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.660837 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.750407 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36143374-28fb-4560-97c8-11f65509228e" path="/var/lib/kubelet/pods/36143374-28fb-4560-97c8-11f65509228e/volumes" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.771822 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-run-httpd\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.771874 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-log-httpd\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.771900 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxtx\" (UniqueName: \"kubernetes.io/projected/83312dde-420c-46ad-b310-3a115fa347f7-kube-api-access-vcxtx\") pod \"ceilometer-0\" 
(UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.771919 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.771935 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-scripts\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.771974 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.772093 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-config-data\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.772151 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 
05:43:29.873393 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.873491 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-run-httpd\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.873541 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-log-httpd\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.873567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcxtx\" (UniqueName: \"kubernetes.io/projected/83312dde-420c-46ad-b310-3a115fa347f7-kube-api-access-vcxtx\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.873599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.873615 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-scripts\") pod 
\"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.873656 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.873687 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-config-data\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.874084 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-run-httpd\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.874171 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-log-httpd\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.880611 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-config-data\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.881384 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.881888 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-scripts\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.883707 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.890087 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.892818 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcxtx\" (UniqueName: \"kubernetes.io/projected/83312dde-420c-46ad-b310-3a115fa347f7-kube-api-access-vcxtx\") pod \"ceilometer-0\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " pod="openstack/ceilometer-0" Jan 31 05:43:29 crc kubenswrapper[5050]: I0131 05:43:29.983529 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:30 crc kubenswrapper[5050]: I0131 05:43:30.511127 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:31 crc kubenswrapper[5050]: I0131 05:43:31.543448 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerStarted","Data":"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda"} Jan 31 05:43:31 crc kubenswrapper[5050]: I0131 05:43:31.543690 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerStarted","Data":"399d659d2457e1d85bf46d39f1b69b2140a8b9e0decee54093a69e99ae7306e1"} Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.555791 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerStarted","Data":"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0"} Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.699058 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.703307 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.704545 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.704645 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.713092 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.716501 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:43:32 crc kubenswrapper[5050]: I0131 05:43:32.744138 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.586084 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerStarted","Data":"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f"} Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.588055 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.596489 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.620277 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.823129 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-lpqj8"] Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.824944 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.846818 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-lpqj8"] Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.945780 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-config\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.945865 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6fwn\" (UniqueName: \"kubernetes.io/projected/c744dd82-741e-4835-90e2-454ad9587ff0-kube-api-access-w6fwn\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.945895 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.945909 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:33 crc kubenswrapper[5050]: I0131 05:43:33.945943 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-dns-svc\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.047888 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-config\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.047985 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6fwn\" (UniqueName: \"kubernetes.io/projected/c744dd82-741e-4835-90e2-454ad9587ff0-kube-api-access-w6fwn\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.048011 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.048027 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.048056 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-dns-svc\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.048815 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-config\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.048845 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-dns-svc\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.048902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.049077 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.086809 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6fwn\" (UniqueName: \"kubernetes.io/projected/c744dd82-741e-4835-90e2-454ad9587ff0-kube-api-access-w6fwn\") pod 
\"dnsmasq-dns-5b856c5697-lpqj8\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.145838 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.606275 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-lpqj8"] Jan 31 05:43:34 crc kubenswrapper[5050]: I0131 05:43:34.810632 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 05:43:35 crc kubenswrapper[5050]: I0131 05:43:35.603031 5050 generic.go:334] "Generic (PLEG): container finished" podID="c744dd82-741e-4835-90e2-454ad9587ff0" containerID="fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450" exitCode=0 Jan 31 05:43:35 crc kubenswrapper[5050]: I0131 05:43:35.603109 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" event={"ID":"c744dd82-741e-4835-90e2-454ad9587ff0","Type":"ContainerDied","Data":"fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450"} Jan 31 05:43:35 crc kubenswrapper[5050]: I0131 05:43:35.603342 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" event={"ID":"c744dd82-741e-4835-90e2-454ad9587ff0","Type":"ContainerStarted","Data":"3440c42d36482494ee5b2963120775f84e70dbbc8aac349b7a646366505d87ca"} Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.268578 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.460558 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.480382 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610031 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-config-data\") pod \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610068 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtw7r\" (UniqueName: \"kubernetes.io/projected/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-kube-api-access-jtw7r\") pod \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610122 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-combined-ca-bundle\") pod \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\" (UID: \"b8a3ee49-050c-40f4-92fe-38dd438ee2ca\") " Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610594 5050 generic.go:334] "Generic (PLEG): container finished" podID="b8a3ee49-050c-40f4-92fe-38dd438ee2ca" containerID="e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089" exitCode=137 Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610703 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610756 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8a3ee49-050c-40f4-92fe-38dd438ee2ca","Type":"ContainerDied","Data":"e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089"} Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610788 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8a3ee49-050c-40f4-92fe-38dd438ee2ca","Type":"ContainerDied","Data":"0f6e19ea98b8eaa7db453b33596a64e2f81d37e76020176388c9073ebd45f5d7"} Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.610804 5050 scope.go:117] "RemoveContainer" containerID="e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.614488 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" event={"ID":"c744dd82-741e-4835-90e2-454ad9587ff0","Type":"ContainerStarted","Data":"45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1"} Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.614841 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.616047 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-kube-api-access-jtw7r" (OuterVolumeSpecName: "kube-api-access-jtw7r") pod "b8a3ee49-050c-40f4-92fe-38dd438ee2ca" (UID: "b8a3ee49-050c-40f4-92fe-38dd438ee2ca"). InnerVolumeSpecName "kube-api-access-jtw7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.640208 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-config-data" (OuterVolumeSpecName: "config-data") pod "b8a3ee49-050c-40f4-92fe-38dd438ee2ca" (UID: "b8a3ee49-050c-40f4-92fe-38dd438ee2ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.642597 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-log" containerID="cri-o://faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3" gracePeriod=30 Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.643500 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerStarted","Data":"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57"} Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.643527 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.644532 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-api" containerID="cri-o://189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7" gracePeriod=30 Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.647879 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" podStartSLOduration=3.647865479 podStartE2EDuration="3.647865479s" podCreationTimestamp="2026-01-31 05:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:36.646510452 +0000 UTC m=+1341.695672048" watchObservedRunningTime="2026-01-31 05:43:36.647865479 +0000 UTC m=+1341.697027075" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.653098 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8a3ee49-050c-40f4-92fe-38dd438ee2ca" (UID: "b8a3ee49-050c-40f4-92fe-38dd438ee2ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.677299 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.829456848 podStartE2EDuration="7.677281259s" podCreationTimestamp="2026-01-31 05:43:29 +0000 UTC" firstStartedPulling="2026-01-31 05:43:30.524904894 +0000 UTC m=+1335.574066500" lastFinishedPulling="2026-01-31 05:43:35.372729315 +0000 UTC m=+1340.421890911" observedRunningTime="2026-01-31 05:43:36.672325465 +0000 UTC m=+1341.721487071" watchObservedRunningTime="2026-01-31 05:43:36.677281259 +0000 UTC m=+1341.726442855" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.678090 5050 scope.go:117] "RemoveContainer" containerID="e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089" Jan 31 05:43:36 crc kubenswrapper[5050]: E0131 05:43:36.682051 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089\": container with ID starting with e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089 not found: ID does not exist" containerID="e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.682089 5050 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089"} err="failed to get container status \"e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089\": rpc error: code = NotFound desc = could not find container \"e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089\": container with ID starting with e014f3a600d181b85b9950c7abc60e241aaccb66b8e90ef2cc7daadb5c4b7089 not found: ID does not exist" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.712382 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.712420 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtw7r\" (UniqueName: \"kubernetes.io/projected/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-kube-api-access-jtw7r\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.712431 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8a3ee49-050c-40f4-92fe-38dd438ee2ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.938360 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.951154 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.971400 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:43:36 crc kubenswrapper[5050]: E0131 05:43:36.971817 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a3ee49-050c-40f4-92fe-38dd438ee2ca" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.971828 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a3ee49-050c-40f4-92fe-38dd438ee2ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.972013 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a3ee49-050c-40f4-92fe-38dd438ee2ca" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.972547 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.980337 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.980622 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.980731 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 05:43:36 crc kubenswrapper[5050]: I0131 05:43:36.984872 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.122769 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.122830 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.122981 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.123049 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zwdq\" (UniqueName: \"kubernetes.io/projected/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-kube-api-access-7zwdq\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.123091 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.224506 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.224601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zwdq\" 
(UniqueName: \"kubernetes.io/projected/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-kube-api-access-7zwdq\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.224641 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.224714 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.224745 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.228316 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.229019 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.229294 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.239902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.244992 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zwdq\" (UniqueName: \"kubernetes.io/projected/6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f-kube-api-access-7zwdq\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.294491 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.710315 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerID="faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3" exitCode=143 Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.711021 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a0ddaee-6080-4cef-b5d5-a470496ba5d4","Type":"ContainerDied","Data":"faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3"} Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.711155 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-central-agent" containerID="cri-o://6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" gracePeriod=30 Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.711254 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="proxy-httpd" containerID="cri-o://5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" gracePeriod=30 Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.711316 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="sg-core" containerID="cri-o://cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" gracePeriod=30 Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.711347 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-notification-agent" 
containerID="cri-o://98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" gracePeriod=30 Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.752259 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a3ee49-050c-40f4-92fe-38dd438ee2ca" path="/var/lib/kubelet/pods/b8a3ee49-050c-40f4-92fe-38dd438ee2ca/volumes" Jan 31 05:43:37 crc kubenswrapper[5050]: I0131 05:43:37.844626 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:43:37 crc kubenswrapper[5050]: W0131 05:43:37.848498 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e64bf1b_ff3c_4c6f_baa6_9737fd893d5f.slice/crio-16cebee142f268fe04df38cfbaae76c58d46b8923f696fcceac1e16ae43821bb WatchSource:0}: Error finding container 16cebee142f268fe04df38cfbaae76c58d46b8923f696fcceac1e16ae43821bb: Status 404 returned error can't find the container with id 16cebee142f268fe04df38cfbaae76c58d46b8923f696fcceac1e16ae43821bb Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.533811 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649365 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-scripts\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649425 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-run-httpd\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649507 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-combined-ca-bundle\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649557 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-config-data\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649697 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-log-httpd\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649726 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-sg-core-conf-yaml\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649772 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-ceilometer-tls-certs\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.649821 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcxtx\" (UniqueName: \"kubernetes.io/projected/83312dde-420c-46ad-b310-3a115fa347f7-kube-api-access-vcxtx\") pod \"83312dde-420c-46ad-b310-3a115fa347f7\" (UID: \"83312dde-420c-46ad-b310-3a115fa347f7\") " Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.651121 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.660244 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83312dde-420c-46ad-b310-3a115fa347f7-kube-api-access-vcxtx" (OuterVolumeSpecName: "kube-api-access-vcxtx") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "kube-api-access-vcxtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.660256 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-scripts" (OuterVolumeSpecName: "scripts") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.661309 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.692498 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.722604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f","Type":"ContainerStarted","Data":"64118e43ae8aae73dd7f7eb2e522c645590b35cd16b3022f6fa019796ca1a920"} Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.722652 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f","Type":"ContainerStarted","Data":"16cebee142f268fe04df38cfbaae76c58d46b8923f696fcceac1e16ae43821bb"} Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731340 5050 generic.go:334] "Generic (PLEG): container finished" podID="83312dde-420c-46ad-b310-3a115fa347f7" containerID="5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" exitCode=0 Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731369 5050 generic.go:334] "Generic (PLEG): container finished" podID="83312dde-420c-46ad-b310-3a115fa347f7" containerID="cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" exitCode=2 Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731378 5050 generic.go:334] "Generic (PLEG): container finished" podID="83312dde-420c-46ad-b310-3a115fa347f7" containerID="98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" exitCode=0 Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731386 5050 generic.go:334] "Generic (PLEG): container finished" podID="83312dde-420c-46ad-b310-3a115fa347f7" containerID="6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" exitCode=0 Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731405 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerDied","Data":"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57"} Jan 31 
05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731433 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerDied","Data":"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f"} Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731430 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731457 5050 scope.go:117] "RemoveContainer" containerID="5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731446 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerDied","Data":"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0"} Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731649 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerDied","Data":"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda"} Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.731660 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83312dde-420c-46ad-b310-3a115fa347f7","Type":"ContainerDied","Data":"399d659d2457e1d85bf46d39f1b69b2140a8b9e0decee54093a69e99ae7306e1"} Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.733271 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.752582 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcxtx\" (UniqueName: \"kubernetes.io/projected/83312dde-420c-46ad-b310-3a115fa347f7-kube-api-access-vcxtx\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.752609 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.752617 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.752627 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83312dde-420c-46ad-b310-3a115fa347f7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.752635 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.752643 5050 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.752885 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.752863212 podStartE2EDuration="2.752863212s" podCreationTimestamp="2026-01-31 05:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:38.742315685 +0000 UTC m=+1343.791477281" watchObservedRunningTime="2026-01-31 05:43:38.752863212 +0000 UTC m=+1343.802024818" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.758794 5050 scope.go:117] "RemoveContainer" containerID="cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.777344 5050 scope.go:117] "RemoveContainer" containerID="98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.778183 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.785222 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-config-data" (OuterVolumeSpecName: "config-data") pod "83312dde-420c-46ad-b310-3a115fa347f7" (UID: "83312dde-420c-46ad-b310-3a115fa347f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.799404 5050 scope.go:117] "RemoveContainer" containerID="6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.820896 5050 scope.go:117] "RemoveContainer" containerID="5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" Jan 31 05:43:38 crc kubenswrapper[5050]: E0131 05:43:38.821407 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": container with ID starting with 5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57 not found: ID does not exist" containerID="5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.821472 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57"} err="failed to get container status \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": rpc error: code = NotFound desc = could not find container \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": container with ID starting with 5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57 not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.821505 5050 scope.go:117] "RemoveContainer" containerID="cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" Jan 31 05:43:38 crc kubenswrapper[5050]: E0131 05:43:38.821848 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": container with ID starting with 
cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f not found: ID does not exist" containerID="cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.821896 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f"} err="failed to get container status \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": rpc error: code = NotFound desc = could not find container \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": container with ID starting with cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.821926 5050 scope.go:117] "RemoveContainer" containerID="98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" Jan 31 05:43:38 crc kubenswrapper[5050]: E0131 05:43:38.822201 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": container with ID starting with 98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0 not found: ID does not exist" containerID="98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.822236 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0"} err="failed to get container status \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": rpc error: code = NotFound desc = could not find container \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": container with ID starting with 98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0 not found: ID does not 
exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.822255 5050 scope.go:117] "RemoveContainer" containerID="6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" Jan 31 05:43:38 crc kubenswrapper[5050]: E0131 05:43:38.822502 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": container with ID starting with 6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda not found: ID does not exist" containerID="6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.822537 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda"} err="failed to get container status \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": rpc error: code = NotFound desc = could not find container \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": container with ID starting with 6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.822556 5050 scope.go:117] "RemoveContainer" containerID="5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.822771 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57"} err="failed to get container status \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": rpc error: code = NotFound desc = could not find container \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": container with ID starting with 5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57 not found: ID 
does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.822801 5050 scope.go:117] "RemoveContainer" containerID="cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.823484 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f"} err="failed to get container status \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": rpc error: code = NotFound desc = could not find container \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": container with ID starting with cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.823600 5050 scope.go:117] "RemoveContainer" containerID="98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826017 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0"} err="failed to get container status \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": rpc error: code = NotFound desc = could not find container \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": container with ID starting with 98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0 not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826047 5050 scope.go:117] "RemoveContainer" containerID="6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826308 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda"} err="failed to get container 
status \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": rpc error: code = NotFound desc = could not find container \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": container with ID starting with 6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826337 5050 scope.go:117] "RemoveContainer" containerID="5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826664 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57"} err="failed to get container status \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": rpc error: code = NotFound desc = could not find container \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": container with ID starting with 5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57 not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826689 5050 scope.go:117] "RemoveContainer" containerID="cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826970 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f"} err="failed to get container status \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": rpc error: code = NotFound desc = could not find container \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": container with ID starting with cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.826993 5050 scope.go:117] "RemoveContainer" 
containerID="98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827258 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0"} err="failed to get container status \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": rpc error: code = NotFound desc = could not find container \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": container with ID starting with 98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0 not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827276 5050 scope.go:117] "RemoveContainer" containerID="6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827447 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda"} err="failed to get container status \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": rpc error: code = NotFound desc = could not find container \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": container with ID starting with 6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827466 5050 scope.go:117] "RemoveContainer" containerID="5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827705 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57"} err="failed to get container status \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": rpc error: code = NotFound desc = could 
not find container \"5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57\": container with ID starting with 5da710dbd506210038692af502dab511a724107e760b50500bb0a9036361ee57 not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827739 5050 scope.go:117] "RemoveContainer" containerID="cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827912 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f"} err="failed to get container status \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": rpc error: code = NotFound desc = could not find container \"cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f\": container with ID starting with cbfb48797f4227d610d13c0e4c9575f5d9736d79ad2f086c530bd1cc93f57f3f not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.827935 5050 scope.go:117] "RemoveContainer" containerID="98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.828234 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0"} err="failed to get container status \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": rpc error: code = NotFound desc = could not find container \"98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0\": container with ID starting with 98ac2ba0b66ef3b715ceb5cc44ff622ffab9f4377db7540d051e8ddac787a2a0 not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.828261 5050 scope.go:117] "RemoveContainer" containerID="6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 
05:43:38.828638 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda"} err="failed to get container status \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": rpc error: code = NotFound desc = could not find container \"6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda\": container with ID starting with 6a7638153e4fffb0941ebdd49efd669b50872835c5eb109879233b250f297eda not found: ID does not exist" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.856862 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:38 crc kubenswrapper[5050]: I0131 05:43:38.856999 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83312dde-420c-46ad-b310-3a115fa347f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.018250 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.018589 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.090681 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:39 crc 
kubenswrapper[5050]: I0131 05:43:39.098581 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.122305 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:39 crc kubenswrapper[5050]: E0131 05:43:39.123040 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-central-agent" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123079 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-central-agent" Jan 31 05:43:39 crc kubenswrapper[5050]: E0131 05:43:39.123149 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-notification-agent" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123168 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-notification-agent" Jan 31 05:43:39 crc kubenswrapper[5050]: E0131 05:43:39.123194 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="proxy-httpd" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123213 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="proxy-httpd" Jan 31 05:43:39 crc kubenswrapper[5050]: E0131 05:43:39.123284 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="sg-core" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123301 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="sg-core" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123670 5050 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="proxy-httpd" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123725 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="sg-core" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123760 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-central-agent" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.123791 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="83312dde-420c-46ad-b310-3a115fa347f7" containerName="ceilometer-notification-agent" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.126743 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.130752 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.131049 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.132394 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.135881 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264150 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-log-httpd\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264573 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-run-httpd\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264650 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264674 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwzh\" (UniqueName: \"kubernetes.io/projected/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-kube-api-access-tgwzh\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264693 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-scripts\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264719 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264774 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.264799 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-config-data\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.366975 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.367038 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgwzh\" (UniqueName: \"kubernetes.io/projected/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-kube-api-access-tgwzh\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.367061 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-scripts\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.367092 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 
05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.367171 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.367209 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-config-data\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.367266 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-log-httpd\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.367351 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-run-httpd\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.368055 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-run-httpd\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.368157 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-log-httpd\") pod \"ceilometer-0\" 
(UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.374593 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-config-data\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.375284 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-scripts\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.378134 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.378921 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.379153 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.402479 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgwzh\" (UniqueName: 
\"kubernetes.io/projected/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-kube-api-access-tgwzh\") pod \"ceilometer-0\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.507801 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:43:39 crc kubenswrapper[5050]: I0131 05:43:39.751611 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83312dde-420c-46ad-b310-3a115fa347f7" path="/var/lib/kubelet/pods/83312dde-420c-46ad-b310-3a115fa347f7/volumes" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.028069 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:43:40 crc kubenswrapper[5050]: W0131 05:43:40.043416 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2fd8d9_2a70_45bd_a0bc_02638aa83992.slice/crio-851a2da23530ba56925ce4bda8fb96f3cba07a5f46c678ccabc9f9f36c4eac29 WatchSource:0}: Error finding container 851a2da23530ba56925ce4bda8fb96f3cba07a5f46c678ccabc9f9f36c4eac29: Status 404 returned error can't find the container with id 851a2da23530ba56925ce4bda8fb96f3cba07a5f46c678ccabc9f9f36c4eac29 Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.248355 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.399116 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-logs\") pod \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.399169 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-config-data\") pod \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.399337 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv7m2\" (UniqueName: \"kubernetes.io/projected/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-kube-api-access-fv7m2\") pod \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.399397 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-combined-ca-bundle\") pod \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\" (UID: \"0a0ddaee-6080-4cef-b5d5-a470496ba5d4\") " Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.400456 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-logs" (OuterVolumeSpecName: "logs") pod "0a0ddaee-6080-4cef-b5d5-a470496ba5d4" (UID: "0a0ddaee-6080-4cef-b5d5-a470496ba5d4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.405420 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-kube-api-access-fv7m2" (OuterVolumeSpecName: "kube-api-access-fv7m2") pod "0a0ddaee-6080-4cef-b5d5-a470496ba5d4" (UID: "0a0ddaee-6080-4cef-b5d5-a470496ba5d4"). InnerVolumeSpecName "kube-api-access-fv7m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.443925 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0ddaee-6080-4cef-b5d5-a470496ba5d4" (UID: "0a0ddaee-6080-4cef-b5d5-a470496ba5d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.470189 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-config-data" (OuterVolumeSpecName: "config-data") pod "0a0ddaee-6080-4cef-b5d5-a470496ba5d4" (UID: "0a0ddaee-6080-4cef-b5d5-a470496ba5d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.501931 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7m2\" (UniqueName: \"kubernetes.io/projected/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-kube-api-access-fv7m2\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.501978 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.501988 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.501997 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0ddaee-6080-4cef-b5d5-a470496ba5d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.762418 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerID="189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7" exitCode=0 Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.762643 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a0ddaee-6080-4cef-b5d5-a470496ba5d4","Type":"ContainerDied","Data":"189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7"} Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.762741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a0ddaee-6080-4cef-b5d5-a470496ba5d4","Type":"ContainerDied","Data":"8cbb6d75e88e828683bda5f7767e5291cc3a98913aa6b7e1b4614367c8919056"} Jan 31 05:43:40 crc kubenswrapper[5050]: 
I0131 05:43:40.762763 5050 scope.go:117] "RemoveContainer" containerID="189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.764927 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.765227 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerStarted","Data":"a5c4ed3b579d84e9770e2b8bd8b68b9756a8b388edf140f586cd133278b394da"} Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.765250 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerStarted","Data":"851a2da23530ba56925ce4bda8fb96f3cba07a5f46c678ccabc9f9f36c4eac29"} Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.784383 5050 scope.go:117] "RemoveContainer" containerID="faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.811697 5050 scope.go:117] "RemoveContainer" containerID="189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.814497 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:40 crc kubenswrapper[5050]: E0131 05:43:40.816934 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7\": container with ID starting with 189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7 not found: ID does not exist" containerID="189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.816995 5050 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7"} err="failed to get container status \"189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7\": rpc error: code = NotFound desc = could not find container \"189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7\": container with ID starting with 189ad8d38828765196b51c339bd11aba7e68bba401166505fc69aa385478a0c7 not found: ID does not exist" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.817025 5050 scope.go:117] "RemoveContainer" containerID="faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3" Jan 31 05:43:40 crc kubenswrapper[5050]: E0131 05:43:40.822760 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3\": container with ID starting with faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3 not found: ID does not exist" containerID="faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.822803 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3"} err="failed to get container status \"faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3\": rpc error: code = NotFound desc = could not find container \"faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3\": container with ID starting with faefd57e6c46415fe00962456d8cf3cea6a2ba3f169b68cb6f835d3289c5a8c3 not found: ID does not exist" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.826719 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.835264 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 
31 05:43:40 crc kubenswrapper[5050]: E0131 05:43:40.835675 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-api" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.835691 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-api" Jan 31 05:43:40 crc kubenswrapper[5050]: E0131 05:43:40.835722 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-log" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.835729 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-log" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.835893 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-api" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.835919 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" containerName="nova-api-log" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.836813 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.853992 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.854076 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.854182 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.877411 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.915896 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-logs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.916171 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.916243 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjcl\" (UniqueName: \"kubernetes.io/projected/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-kube-api-access-gvjcl\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.916335 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-config-data\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.916455 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:40 crc kubenswrapper[5050]: I0131 05:43:40.916554 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.018654 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-logs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.018759 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.018786 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjcl\" (UniqueName: \"kubernetes.io/projected/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-kube-api-access-gvjcl\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 
05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.018841 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-config-data\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.018907 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.018930 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.019181 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-logs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.025909 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-config-data\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.028693 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.029337 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-internal-tls-certs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.034311 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-public-tls-certs\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.039717 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjcl\" (UniqueName: \"kubernetes.io/projected/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-kube-api-access-gvjcl\") pod \"nova-api-0\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.193937 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:41 crc kubenswrapper[5050]: W0131 05:43:41.648977 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71ef7c46_e0f1_47e0_bfb5_95e4b9a008ab.slice/crio-7cbcc5fc1dcddea2d7c33a3797ba620537a44768a6ecc4e4acd2cc33ca4cae75 WatchSource:0}: Error finding container 7cbcc5fc1dcddea2d7c33a3797ba620537a44768a6ecc4e4acd2cc33ca4cae75: Status 404 returned error can't find the container with id 7cbcc5fc1dcddea2d7c33a3797ba620537a44768a6ecc4e4acd2cc33ca4cae75 Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.650233 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.746540 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0ddaee-6080-4cef-b5d5-a470496ba5d4" path="/var/lib/kubelet/pods/0a0ddaee-6080-4cef-b5d5-a470496ba5d4/volumes" Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.776091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab","Type":"ContainerStarted","Data":"7cbcc5fc1dcddea2d7c33a3797ba620537a44768a6ecc4e4acd2cc33ca4cae75"} Jan 31 05:43:41 crc kubenswrapper[5050]: I0131 05:43:41.781038 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerStarted","Data":"81ac2632b609615b7cf8f0187811df728517e6c5f5ffff109bbad6d55ce84122"} Jan 31 05:43:42 crc kubenswrapper[5050]: I0131 05:43:42.295167 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:42 crc kubenswrapper[5050]: I0131 05:43:42.793222 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerStarted","Data":"ca9d2e6875b4832aec78487e11aed648cef2a19f2ef7a51a981f28bbb5201fc5"} Jan 31 05:43:42 crc kubenswrapper[5050]: I0131 05:43:42.795227 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab","Type":"ContainerStarted","Data":"6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d"} Jan 31 05:43:42 crc kubenswrapper[5050]: I0131 05:43:42.795266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab","Type":"ContainerStarted","Data":"fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1"} Jan 31 05:43:42 crc kubenswrapper[5050]: I0131 05:43:42.811266 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.811248523 podStartE2EDuration="2.811248523s" podCreationTimestamp="2026-01-31 05:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:42.80929955 +0000 UTC m=+1347.858461156" watchObservedRunningTime="2026-01-31 05:43:42.811248523 +0000 UTC m=+1347.860410119" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.147092 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.220489 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-t5fw8"] Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.220750 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" podUID="f145abf7-672b-48e2-80e6-52fdae845626" containerName="dnsmasq-dns" containerID="cri-o://343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20" 
gracePeriod=10 Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.777328 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.814968 5050 generic.go:334] "Generic (PLEG): container finished" podID="f145abf7-672b-48e2-80e6-52fdae845626" containerID="343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20" exitCode=0 Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.815020 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" event={"ID":"f145abf7-672b-48e2-80e6-52fdae845626","Type":"ContainerDied","Data":"343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20"} Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.815045 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" event={"ID":"f145abf7-672b-48e2-80e6-52fdae845626","Type":"ContainerDied","Data":"fadfa896760b65aecab82e943c2ef53fc200bc4b3f788731e60b80afecf6f9ed"} Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.815063 5050 scope.go:117] "RemoveContainer" containerID="343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.815161 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-t5fw8" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.820091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerStarted","Data":"0c5041d5bdc281e96a441e39025a784e965100cf5a1c16af9bb04202a2f680d1"} Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.820793 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.842569 5050 scope.go:117] "RemoveContainer" containerID="a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.863196 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.972284319 podStartE2EDuration="5.863176733s" podCreationTimestamp="2026-01-31 05:43:39 +0000 UTC" firstStartedPulling="2026-01-31 05:43:40.046009056 +0000 UTC m=+1345.095170652" lastFinishedPulling="2026-01-31 05:43:43.93690146 +0000 UTC m=+1348.986063066" observedRunningTime="2026-01-31 05:43:44.855153834 +0000 UTC m=+1349.904315430" watchObservedRunningTime="2026-01-31 05:43:44.863176733 +0000 UTC m=+1349.912338319" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.881026 5050 scope.go:117] "RemoveContainer" containerID="343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20" Jan 31 05:43:44 crc kubenswrapper[5050]: E0131 05:43:44.881486 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20\": container with ID starting with 343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20 not found: ID does not exist" containerID="343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20" Jan 31 05:43:44 crc 
kubenswrapper[5050]: I0131 05:43:44.881521 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20"} err="failed to get container status \"343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20\": rpc error: code = NotFound desc = could not find container \"343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20\": container with ID starting with 343984be0058697167af07292f4594ed8f0517610a2e96e226d306707a92bb20 not found: ID does not exist" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.881547 5050 scope.go:117] "RemoveContainer" containerID="a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0" Jan 31 05:43:44 crc kubenswrapper[5050]: E0131 05:43:44.881836 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0\": container with ID starting with a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0 not found: ID does not exist" containerID="a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.881881 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0"} err="failed to get container status \"a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0\": rpc error: code = NotFound desc = could not find container \"a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0\": container with ID starting with a8876be31c60ab68fa01518923f17ac17e7d6301224292a6244c608d88d7a6d0 not found: ID does not exist" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.928189 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jdlv\" (UniqueName: 
\"kubernetes.io/projected/f145abf7-672b-48e2-80e6-52fdae845626-kube-api-access-5jdlv\") pod \"f145abf7-672b-48e2-80e6-52fdae845626\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.928313 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-dns-svc\") pod \"f145abf7-672b-48e2-80e6-52fdae845626\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.928332 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-sb\") pod \"f145abf7-672b-48e2-80e6-52fdae845626\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.928365 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-nb\") pod \"f145abf7-672b-48e2-80e6-52fdae845626\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.928391 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-config\") pod \"f145abf7-672b-48e2-80e6-52fdae845626\" (UID: \"f145abf7-672b-48e2-80e6-52fdae845626\") " Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.933381 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f145abf7-672b-48e2-80e6-52fdae845626-kube-api-access-5jdlv" (OuterVolumeSpecName: "kube-api-access-5jdlv") pod "f145abf7-672b-48e2-80e6-52fdae845626" (UID: "f145abf7-672b-48e2-80e6-52fdae845626"). InnerVolumeSpecName "kube-api-access-5jdlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.970790 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f145abf7-672b-48e2-80e6-52fdae845626" (UID: "f145abf7-672b-48e2-80e6-52fdae845626"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.970833 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f145abf7-672b-48e2-80e6-52fdae845626" (UID: "f145abf7-672b-48e2-80e6-52fdae845626"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.978615 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f145abf7-672b-48e2-80e6-52fdae845626" (UID: "f145abf7-672b-48e2-80e6-52fdae845626"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:44 crc kubenswrapper[5050]: I0131 05:43:44.987653 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-config" (OuterVolumeSpecName: "config") pod "f145abf7-672b-48e2-80e6-52fdae845626" (UID: "f145abf7-672b-48e2-80e6-52fdae845626"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.030868 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.030901 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.030912 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.030920 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f145abf7-672b-48e2-80e6-52fdae845626-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.030930 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jdlv\" (UniqueName: \"kubernetes.io/projected/f145abf7-672b-48e2-80e6-52fdae845626-kube-api-access-5jdlv\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.159019 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-t5fw8"] Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.168426 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-t5fw8"] Jan 31 05:43:45 crc kubenswrapper[5050]: I0131 05:43:45.754540 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f145abf7-672b-48e2-80e6-52fdae845626" path="/var/lib/kubelet/pods/f145abf7-672b-48e2-80e6-52fdae845626/volumes" Jan 31 05:43:47 crc kubenswrapper[5050]: 
I0131 05:43:47.295622 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:47 crc kubenswrapper[5050]: I0131 05:43:47.318416 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:47 crc kubenswrapper[5050]: I0131 05:43:47.886033 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.097373 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-blxjh"] Jan 31 05:43:48 crc kubenswrapper[5050]: E0131 05:43:48.098346 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f145abf7-672b-48e2-80e6-52fdae845626" containerName="dnsmasq-dns" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.098380 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f145abf7-672b-48e2-80e6-52fdae845626" containerName="dnsmasq-dns" Jan 31 05:43:48 crc kubenswrapper[5050]: E0131 05:43:48.098414 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f145abf7-672b-48e2-80e6-52fdae845626" containerName="init" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.098428 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f145abf7-672b-48e2-80e6-52fdae845626" containerName="init" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.098752 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f145abf7-672b-48e2-80e6-52fdae845626" containerName="dnsmasq-dns" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.099532 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.104143 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.104648 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.108045 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-blxjh"] Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.192579 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-config-data\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.192760 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.192797 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-scripts\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.193109 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqm6k\" (UniqueName: 
\"kubernetes.io/projected/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-kube-api-access-sqm6k\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.296313 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-config-data\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.296537 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.296574 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-scripts\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.296639 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqm6k\" (UniqueName: \"kubernetes.io/projected/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-kube-api-access-sqm6k\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.307601 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.307710 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-config-data\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.310401 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-scripts\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.324986 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqm6k\" (UniqueName: \"kubernetes.io/projected/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-kube-api-access-sqm6k\") pod \"nova-cell1-cell-mapping-blxjh\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.439338 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:48 crc kubenswrapper[5050]: I0131 05:43:48.931875 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-blxjh"] Jan 31 05:43:48 crc kubenswrapper[5050]: W0131 05:43:48.948084 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb2ce56a_f67a_4dbc_9bc5_e5ba11a1843c.slice/crio-b4616a1dfbf932e5a2e4f8e18bbc3d063a40c8e85536fbc3cfed0114e36404ab WatchSource:0}: Error finding container b4616a1dfbf932e5a2e4f8e18bbc3d063a40c8e85536fbc3cfed0114e36404ab: Status 404 returned error can't find the container with id b4616a1dfbf932e5a2e4f8e18bbc3d063a40c8e85536fbc3cfed0114e36404ab Jan 31 05:43:49 crc kubenswrapper[5050]: I0131 05:43:49.879730 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-blxjh" event={"ID":"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c","Type":"ContainerStarted","Data":"371421fc890f818595e0cb15e8837631374bb17d76260f0f01a3c8f2a2f4956a"} Jan 31 05:43:49 crc kubenswrapper[5050]: I0131 05:43:49.880788 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-blxjh" event={"ID":"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c","Type":"ContainerStarted","Data":"b4616a1dfbf932e5a2e4f8e18bbc3d063a40c8e85536fbc3cfed0114e36404ab"} Jan 31 05:43:49 crc kubenswrapper[5050]: I0131 05:43:49.911450 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-blxjh" podStartSLOduration=1.911432226 podStartE2EDuration="1.911432226s" podCreationTimestamp="2026-01-31 05:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:43:49.908562198 +0000 UTC m=+1354.957723844" watchObservedRunningTime="2026-01-31 05:43:49.911432226 +0000 UTC m=+1354.960593822" Jan 31 05:43:51 crc 
kubenswrapper[5050]: I0131 05:43:51.194652 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:43:51 crc kubenswrapper[5050]: I0131 05:43:51.194716 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:43:52 crc kubenswrapper[5050]: I0131 05:43:52.208164 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:43:52 crc kubenswrapper[5050]: I0131 05:43:52.208508 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:43:53 crc kubenswrapper[5050]: I0131 05:43:53.952542 5050 generic.go:334] "Generic (PLEG): container finished" podID="fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" containerID="371421fc890f818595e0cb15e8837631374bb17d76260f0f01a3c8f2a2f4956a" exitCode=0 Jan 31 05:43:53 crc kubenswrapper[5050]: I0131 05:43:53.952593 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-blxjh" event={"ID":"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c","Type":"ContainerDied","Data":"371421fc890f818595e0cb15e8837631374bb17d76260f0f01a3c8f2a2f4956a"} Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.336806 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.433602 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqm6k\" (UniqueName: \"kubernetes.io/projected/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-kube-api-access-sqm6k\") pod \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.433683 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-scripts\") pod \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.435142 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-combined-ca-bundle\") pod \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.435292 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-config-data\") pod \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\" (UID: \"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c\") " Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.441754 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-scripts" (OuterVolumeSpecName: "scripts") pod "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" (UID: "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.441769 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-kube-api-access-sqm6k" (OuterVolumeSpecName: "kube-api-access-sqm6k") pod "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" (UID: "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c"). InnerVolumeSpecName "kube-api-access-sqm6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.474394 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" (UID: "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.481942 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-config-data" (OuterVolumeSpecName: "config-data") pod "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" (UID: "fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.538543 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqm6k\" (UniqueName: \"kubernetes.io/projected/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-kube-api-access-sqm6k\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.538599 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.538627 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.538651 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.973356 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-blxjh" event={"ID":"fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c","Type":"ContainerDied","Data":"b4616a1dfbf932e5a2e4f8e18bbc3d063a40c8e85536fbc3cfed0114e36404ab"} Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.973414 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4616a1dfbf932e5a2e4f8e18bbc3d063a40c8e85536fbc3cfed0114e36404ab" Jan 31 05:43:55 crc kubenswrapper[5050]: I0131 05:43:55.973446 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-blxjh" Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.182612 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.182922 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bbdeb55b-9447-49ac-ac5e-6264ac54cb35" containerName="nova-scheduler-scheduler" containerID="cri-o://96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b" gracePeriod=30 Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.196309 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.196660 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-log" containerID="cri-o://fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1" gracePeriod=30 Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.196826 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-api" containerID="cri-o://6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d" gracePeriod=30 Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.233369 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.233660 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-log" containerID="cri-o://c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5" gracePeriod=30 Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.233811 5050 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-metadata" containerID="cri-o://1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87" gracePeriod=30 Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.991537 5050 generic.go:334] "Generic (PLEG): container finished" podID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerID="fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1" exitCode=143 Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.991630 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab","Type":"ContainerDied","Data":"fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1"} Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.996325 5050 generic.go:334] "Generic (PLEG): container finished" podID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerID="c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5" exitCode=143 Jan 31 05:43:56 crc kubenswrapper[5050]: I0131 05:43:56.996362 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9cb357-6854-412f-8fe6-d7c4404ecbc9","Type":"ContainerDied","Data":"c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5"} Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.538915 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.600336 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-combined-ca-bundle\") pod \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.600442 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-427v6\" (UniqueName: \"kubernetes.io/projected/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-kube-api-access-427v6\") pod \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.600484 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-config-data\") pod \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\" (UID: \"bbdeb55b-9447-49ac-ac5e-6264ac54cb35\") " Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.624900 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-kube-api-access-427v6" (OuterVolumeSpecName: "kube-api-access-427v6") pod "bbdeb55b-9447-49ac-ac5e-6264ac54cb35" (UID: "bbdeb55b-9447-49ac-ac5e-6264ac54cb35"). InnerVolumeSpecName "kube-api-access-427v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.640752 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-config-data" (OuterVolumeSpecName: "config-data") pod "bbdeb55b-9447-49ac-ac5e-6264ac54cb35" (UID: "bbdeb55b-9447-49ac-ac5e-6264ac54cb35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.655170 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbdeb55b-9447-49ac-ac5e-6264ac54cb35" (UID: "bbdeb55b-9447-49ac-ac5e-6264ac54cb35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.703811 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.703873 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-427v6\" (UniqueName: \"kubernetes.io/projected/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-kube-api-access-427v6\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:58 crc kubenswrapper[5050]: I0131 05:43:58.703890 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdeb55b-9447-49ac-ac5e-6264ac54cb35-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.019870 5050 generic.go:334] "Generic (PLEG): container finished" podID="bbdeb55b-9447-49ac-ac5e-6264ac54cb35" containerID="96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b" exitCode=0 Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.019972 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.019977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bbdeb55b-9447-49ac-ac5e-6264ac54cb35","Type":"ContainerDied","Data":"96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b"} Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.020505 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bbdeb55b-9447-49ac-ac5e-6264ac54cb35","Type":"ContainerDied","Data":"60759327b396bdf0b2271f0d06c81e9c89087f0756c670a9d3e105bf5824d1fb"} Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.020538 5050 scope.go:117] "RemoveContainer" containerID="96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.061083 5050 scope.go:117] "RemoveContainer" containerID="96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b" Jan 31 05:43:59 crc kubenswrapper[5050]: E0131 05:43:59.061846 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b\": container with ID starting with 96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b not found: ID does not exist" containerID="96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.061902 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b"} err="failed to get container status \"96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b\": rpc error: code = NotFound desc = could not find container \"96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b\": container with ID starting with 
96dc1646498f10b307cf0da67ca43d21790f4f2efbb952bf2f22c54e0569fe2b not found: ID does not exist" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.068168 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.082194 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.092094 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:59 crc kubenswrapper[5050]: E0131 05:43:59.092679 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" containerName="nova-manage" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.092712 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" containerName="nova-manage" Jan 31 05:43:59 crc kubenswrapper[5050]: E0131 05:43:59.092783 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdeb55b-9447-49ac-ac5e-6264ac54cb35" containerName="nova-scheduler-scheduler" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.092799 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdeb55b-9447-49ac-ac5e-6264ac54cb35" containerName="nova-scheduler-scheduler" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.093154 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdeb55b-9447-49ac-ac5e-6264ac54cb35" containerName="nova-scheduler-scheduler" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.093198 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" containerName="nova-manage" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.094340 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.096573 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.111287 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.213354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27e3698-4326-47f3-bda5-3f3d44d551a9-config-data\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.213733 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6nz\" (UniqueName: \"kubernetes.io/projected/a27e3698-4326-47f3-bda5-3f3d44d551a9-kube-api-access-zm6nz\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.214171 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27e3698-4326-47f3-bda5-3f3d44d551a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.316234 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27e3698-4326-47f3-bda5-3f3d44d551a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.316347 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27e3698-4326-47f3-bda5-3f3d44d551a9-config-data\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.316376 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6nz\" (UniqueName: \"kubernetes.io/projected/a27e3698-4326-47f3-bda5-3f3d44d551a9-kube-api-access-zm6nz\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.323119 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27e3698-4326-47f3-bda5-3f3d44d551a9-config-data\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.324061 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27e3698-4326-47f3-bda5-3f3d44d551a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.336737 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6nz\" (UniqueName: \"kubernetes.io/projected/a27e3698-4326-47f3-bda5-3f3d44d551a9-kube-api-access-zm6nz\") pod \"nova-scheduler-0\" (UID: \"a27e3698-4326-47f3-bda5-3f3d44d551a9\") " pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.364207 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.217.0.179:8775/\": read tcp 10.217.0.2:33656->10.217.0.179:8775: read: connection reset by peer" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.364376 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": read tcp 10.217.0.2:33666->10.217.0.179:8775: read: connection reset by peer" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.429140 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.750167 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdeb55b-9447-49ac-ac5e-6264ac54cb35" path="/var/lib/kubelet/pods/bbdeb55b-9447-49ac-ac5e-6264ac54cb35/volumes" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.774238 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.827514 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-internal-tls-certs\") pod \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.827577 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-config-data\") pod \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.827657 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-combined-ca-bundle\") pod \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.827731 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvjcl\" (UniqueName: \"kubernetes.io/projected/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-kube-api-access-gvjcl\") pod \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.827833 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-logs\") pod \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.827871 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-public-tls-certs\") pod \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\" (UID: \"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.830406 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-logs" (OuterVolumeSpecName: "logs") pod "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" (UID: "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.833185 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-kube-api-access-gvjcl" (OuterVolumeSpecName: "kube-api-access-gvjcl") pod "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" (UID: "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab"). InnerVolumeSpecName "kube-api-access-gvjcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.849045 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.877629 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" (UID: "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.886178 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-config-data" (OuterVolumeSpecName: "config-data") pod "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" (UID: "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.900895 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" (UID: "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.910118 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" (UID: "71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930209 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-config-data\") pod \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930338 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-nova-metadata-tls-certs\") pod \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930487 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-combined-ca-bundle\") pod \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-logs\") pod \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930573 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86nph\" (UniqueName: \"kubernetes.io/projected/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-kube-api-access-86nph\") pod \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\" (UID: \"2a9cb357-6854-412f-8fe6-d7c4404ecbc9\") " Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930907 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvjcl\" (UniqueName: 
\"kubernetes.io/projected/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-kube-api-access-gvjcl\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930923 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930933 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930941 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930953 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.930961 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.931335 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-logs" (OuterVolumeSpecName: "logs") pod "2a9cb357-6854-412f-8fe6-d7c4404ecbc9" (UID: "2a9cb357-6854-412f-8fe6-d7c4404ecbc9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.933776 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-kube-api-access-86nph" (OuterVolumeSpecName: "kube-api-access-86nph") pod "2a9cb357-6854-412f-8fe6-d7c4404ecbc9" (UID: "2a9cb357-6854-412f-8fe6-d7c4404ecbc9"). InnerVolumeSpecName "kube-api-access-86nph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.960305 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-config-data" (OuterVolumeSpecName: "config-data") pod "2a9cb357-6854-412f-8fe6-d7c4404ecbc9" (UID: "2a9cb357-6854-412f-8fe6-d7c4404ecbc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.962110 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a9cb357-6854-412f-8fe6-d7c4404ecbc9" (UID: "2a9cb357-6854-412f-8fe6-d7c4404ecbc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:43:59 crc kubenswrapper[5050]: I0131 05:43:59.983745 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2a9cb357-6854-412f-8fe6-d7c4404ecbc9" (UID: "2a9cb357-6854-412f-8fe6-d7c4404ecbc9"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.009102 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.030152 5050 generic.go:334] "Generic (PLEG): container finished" podID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerID="1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87" exitCode=0 Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.030210 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.030246 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9cb357-6854-412f-8fe6-d7c4404ecbc9","Type":"ContainerDied","Data":"1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87"} Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.030294 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9cb357-6854-412f-8fe6-d7c4404ecbc9","Type":"ContainerDied","Data":"0098a03baa2fb01c110b51833feea00cf681754187e8967198b0d0b345552748"} Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.030310 5050 scope.go:117] "RemoveContainer" containerID="1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.032207 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86nph\" (UniqueName: \"kubernetes.io/projected/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-kube-api-access-86nph\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.032232 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:00 crc kubenswrapper[5050]: 
I0131 05:44:00.032242 5050 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.032250 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.032258 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9cb357-6854-412f-8fe6-d7c4404ecbc9-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.045880 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a27e3698-4326-47f3-bda5-3f3d44d551a9","Type":"ContainerStarted","Data":"45564336b0ae3d94064ae8861deec172f78ee875590d2dc8b83ff8c1f810cbf7"} Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.053766 5050 generic.go:334] "Generic (PLEG): container finished" podID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerID="6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d" exitCode=0 Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.053804 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab","Type":"ContainerDied","Data":"6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d"} Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.053826 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab","Type":"ContainerDied","Data":"7cbcc5fc1dcddea2d7c33a3797ba620537a44768a6ecc4e4acd2cc33ca4cae75"} Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.053873 5050 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.072308 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.081688 5050 scope.go:117] "RemoveContainer" containerID="c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.084343 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.117810 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141100 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: E0131 05:44:00.141482 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-log" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141495 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-log" Jan 31 05:44:00 crc kubenswrapper[5050]: E0131 05:44:00.141519 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-api" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141531 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-api" Jan 31 05:44:00 crc kubenswrapper[5050]: E0131 05:44:00.141545 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-log" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141552 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-log" Jan 31 05:44:00 crc kubenswrapper[5050]: E0131 05:44:00.141569 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-metadata" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141574 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-metadata" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141724 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-log" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141743 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-log" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141751 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" containerName="nova-api-api" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.141761 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" containerName="nova-metadata-metadata" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.142821 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.149681 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.149938 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.150763 5050 scope.go:117] "RemoveContainer" containerID="1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.151013 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: E0131 05:44:00.152558 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87\": container with ID starting with 1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87 not found: ID does not exist" containerID="1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.152600 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87"} err="failed to get container status \"1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87\": rpc error: code = NotFound desc = could not find container \"1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87\": container with ID starting with 1ed15b8d61e4e87ad88c62e66d398f3303dd5094c0b4522f1386017fcf96ba87 not found: ID does not exist" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.152624 5050 scope.go:117] "RemoveContainer" containerID="c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5" Jan 31 05:44:00 crc 
kubenswrapper[5050]: E0131 05:44:00.152847 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5\": container with ID starting with c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5 not found: ID does not exist" containerID="c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.152873 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5"} err="failed to get container status \"c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5\": rpc error: code = NotFound desc = could not find container \"c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5\": container with ID starting with c305c557888b15915f7a9c3c170e15c7a6621207b720a8dcf749fafc3adfcdf5 not found: ID does not exist" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.152891 5050 scope.go:117] "RemoveContainer" containerID="6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.162216 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.178330 5050 scope.go:117] "RemoveContainer" containerID="fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.179301 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.181333 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.184793 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.184994 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.188470 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.189946 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.201341 5050 scope.go:117] "RemoveContainer" containerID="6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d" Jan 31 05:44:00 crc kubenswrapper[5050]: E0131 05:44:00.203120 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d\": container with ID starting with 6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d not found: ID does not exist" containerID="6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.203154 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d"} err="failed to get container status \"6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d\": rpc error: code = NotFound desc = could not find container \"6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d\": container with ID starting with 6f900f14468373f81373fbd36e7129128754425dc8f13a50579579279b3d080d not found: ID does not exist" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.203175 5050 
scope.go:117] "RemoveContainer" containerID="fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1" Jan 31 05:44:00 crc kubenswrapper[5050]: E0131 05:44:00.203352 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1\": container with ID starting with fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1 not found: ID does not exist" containerID="fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.203376 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1"} err="failed to get container status \"fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1\": rpc error: code = NotFound desc = could not find container \"fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1\": container with ID starting with fa74d6e3475777f20b38a98ff7609d9a1ae3e29d797340d7babd5634fd7117c1 not found: ID does not exist" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.235572 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-public-tls-certs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.235628 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-config-data\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.235646 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-logs\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.235787 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzbz\" (UniqueName: \"kubernetes.io/projected/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-kube-api-access-qwzbz\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.235848 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.235903 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-config-data\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.235950 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfc6\" (UniqueName: \"kubernetes.io/projected/334fee19-d725-4b1f-85f2-03d26fa6e09e-kube-api-access-7lfc6\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.236011 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.236282 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.236351 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.236431 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334fee19-d725-4b1f-85f2-03d26fa6e09e-logs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.338441 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.338834 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.338902 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334fee19-d725-4b1f-85f2-03d26fa6e09e-logs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339035 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-public-tls-certs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339072 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-config-data\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339100 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-logs\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339154 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwzbz\" (UniqueName: \"kubernetes.io/projected/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-kube-api-access-qwzbz\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339194 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339234 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-config-data\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339279 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfc6\" (UniqueName: \"kubernetes.io/projected/334fee19-d725-4b1f-85f2-03d26fa6e09e-kube-api-access-7lfc6\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.339338 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.340703 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334fee19-d725-4b1f-85f2-03d26fa6e09e-logs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.340876 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-logs\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc 
kubenswrapper[5050]: I0131 05:44:00.342147 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.342798 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-public-tls-certs\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.344301 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-config-data\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.344543 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-config-data\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.345800 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334fee19-d725-4b1f-85f2-03d26fa6e09e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.347069 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.347371 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.359477 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwzbz\" (UniqueName: \"kubernetes.io/projected/70ac4fa7-405d-4fc5-b6eb-46774c40cbec-kube-api-access-qwzbz\") pod \"nova-metadata-0\" (UID: \"70ac4fa7-405d-4fc5-b6eb-46774c40cbec\") " pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.363632 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfc6\" (UniqueName: \"kubernetes.io/projected/334fee19-d725-4b1f-85f2-03d26fa6e09e-kube-api-access-7lfc6\") pod \"nova-api-0\" (UID: \"334fee19-d725-4b1f-85f2-03d26fa6e09e\") " pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.466951 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.505912 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.927505 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: I0131 05:44:00.940807 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:44:00 crc kubenswrapper[5050]: W0131 05:44:00.952423 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70ac4fa7_405d_4fc5_b6eb_46774c40cbec.slice/crio-132d08563f774361b64962c1937b9dd525b36bfcf18891f012e7c35f8a1f243e WatchSource:0}: Error finding container 132d08563f774361b64962c1937b9dd525b36bfcf18891f012e7c35f8a1f243e: Status 404 returned error can't find the container with id 132d08563f774361b64962c1937b9dd525b36bfcf18891f012e7c35f8a1f243e Jan 31 05:44:01 crc kubenswrapper[5050]: I0131 05:44:01.063245 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70ac4fa7-405d-4fc5-b6eb-46774c40cbec","Type":"ContainerStarted","Data":"132d08563f774361b64962c1937b9dd525b36bfcf18891f012e7c35f8a1f243e"} Jan 31 05:44:01 crc kubenswrapper[5050]: I0131 05:44:01.065530 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a27e3698-4326-47f3-bda5-3f3d44d551a9","Type":"ContainerStarted","Data":"93f403ae41b284f238ffeb636ad1284e16b19f335064c27b827699195013c464"} Jan 31 05:44:01 crc kubenswrapper[5050]: I0131 05:44:01.071642 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"334fee19-d725-4b1f-85f2-03d26fa6e09e","Type":"ContainerStarted","Data":"a1795044a6fb1451da21fb1ea312f05d5d29631cc8c8b0d42c0d9690cb84588a"} Jan 31 05:44:01 crc kubenswrapper[5050]: I0131 05:44:01.090875 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.090850712 
podStartE2EDuration="2.090850712s" podCreationTimestamp="2026-01-31 05:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:44:01.081570269 +0000 UTC m=+1366.130731865" watchObservedRunningTime="2026-01-31 05:44:01.090850712 +0000 UTC m=+1366.140012348" Jan 31 05:44:01 crc kubenswrapper[5050]: I0131 05:44:01.748685 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9cb357-6854-412f-8fe6-d7c4404ecbc9" path="/var/lib/kubelet/pods/2a9cb357-6854-412f-8fe6-d7c4404ecbc9/volumes" Jan 31 05:44:01 crc kubenswrapper[5050]: I0131 05:44:01.749606 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab" path="/var/lib/kubelet/pods/71ef7c46-e0f1-47e0-bfb5-95e4b9a008ab/volumes" Jan 31 05:44:02 crc kubenswrapper[5050]: I0131 05:44:02.082350 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70ac4fa7-405d-4fc5-b6eb-46774c40cbec","Type":"ContainerStarted","Data":"1079b70cc8222cebe0f0600ef69e9cc3ccab38652759764af2e09ec4c0ba2951"} Jan 31 05:44:02 crc kubenswrapper[5050]: I0131 05:44:02.082405 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70ac4fa7-405d-4fc5-b6eb-46774c40cbec","Type":"ContainerStarted","Data":"f4889bcd9a872bf370c577b9c6f3d8f81e36dea1c9ba84367126fb1a7d75a684"} Jan 31 05:44:02 crc kubenswrapper[5050]: I0131 05:44:02.088697 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"334fee19-d725-4b1f-85f2-03d26fa6e09e","Type":"ContainerStarted","Data":"d321d7aecad4bab26520602bb813dfa3e5b843674e8ec633566fbff8b1e79c69"} Jan 31 05:44:02 crc kubenswrapper[5050]: I0131 05:44:02.088793 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"334fee19-d725-4b1f-85f2-03d26fa6e09e","Type":"ContainerStarted","Data":"e9de7bf901867f57f593405f1b6527afb242c39d024fc285a7de61b0566a6714"} Jan 31 05:44:02 crc kubenswrapper[5050]: I0131 05:44:02.122860 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.122836584 podStartE2EDuration="2.122836584s" podCreationTimestamp="2026-01-31 05:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:44:02.111357202 +0000 UTC m=+1367.160518818" watchObservedRunningTime="2026-01-31 05:44:02.122836584 +0000 UTC m=+1367.171998190" Jan 31 05:44:02 crc kubenswrapper[5050]: I0131 05:44:02.146816 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.146794975 podStartE2EDuration="2.146794975s" podCreationTimestamp="2026-01-31 05:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:44:02.133547165 +0000 UTC m=+1367.182708771" watchObservedRunningTime="2026-01-31 05:44:02.146794975 +0000 UTC m=+1367.195956571" Jan 31 05:44:04 crc kubenswrapper[5050]: I0131 05:44:04.430231 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 05:44:05 crc kubenswrapper[5050]: I0131 05:44:05.468330 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:44:05 crc kubenswrapper[5050]: I0131 05:44:05.469316 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:44:09 crc kubenswrapper[5050]: I0131 05:44:09.018431 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:44:09 crc kubenswrapper[5050]: I0131 05:44:09.018914 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:44:09 crc kubenswrapper[5050]: I0131 05:44:09.430433 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 05:44:09 crc kubenswrapper[5050]: I0131 05:44:09.485506 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 05:44:09 crc kubenswrapper[5050]: I0131 05:44:09.525923 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 05:44:10 crc kubenswrapper[5050]: I0131 05:44:10.199515 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 05:44:10 crc kubenswrapper[5050]: I0131 05:44:10.468537 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:44:10 crc kubenswrapper[5050]: I0131 05:44:10.468618 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:44:10 crc kubenswrapper[5050]: I0131 05:44:10.506208 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:44:10 crc kubenswrapper[5050]: I0131 05:44:10.506280 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:44:11 crc kubenswrapper[5050]: I0131 05:44:11.481124 5050 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="70ac4fa7-405d-4fc5-b6eb-46774c40cbec" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:44:11 crc kubenswrapper[5050]: I0131 05:44:11.481155 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="70ac4fa7-405d-4fc5-b6eb-46774c40cbec" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:44:11 crc kubenswrapper[5050]: I0131 05:44:11.513162 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="334fee19-d725-4b1f-85f2-03d26fa6e09e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:44:11 crc kubenswrapper[5050]: I0131 05:44:11.513187 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="334fee19-d725-4b1f-85f2-03d26fa6e09e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:44:20 crc kubenswrapper[5050]: I0131 05:44:20.476135 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:44:20 crc kubenswrapper[5050]: I0131 05:44:20.477634 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:44:20 crc kubenswrapper[5050]: I0131 05:44:20.485478 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:44:20 crc kubenswrapper[5050]: I0131 05:44:20.512942 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-api-0" Jan 31 05:44:20 crc kubenswrapper[5050]: I0131 05:44:20.513383 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:44:20 crc kubenswrapper[5050]: I0131 05:44:20.515428 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 05:44:20 crc kubenswrapper[5050]: I0131 05:44:20.520128 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:44:21 crc kubenswrapper[5050]: I0131 05:44:21.278251 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:44:21 crc kubenswrapper[5050]: I0131 05:44:21.281683 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:44:21 crc kubenswrapper[5050]: I0131 05:44:21.296798 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:44:29 crc kubenswrapper[5050]: I0131 05:44:29.224927 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:44:30 crc kubenswrapper[5050]: I0131 05:44:30.074128 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:44:33 crc kubenswrapper[5050]: I0131 05:44:33.262072 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerName="rabbitmq" containerID="cri-o://13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1" gracePeriod=604796 Jan 31 05:44:34 crc kubenswrapper[5050]: I0131 05:44:34.396744 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerName="rabbitmq" 
containerID="cri-o://c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702" gracePeriod=604796 Jan 31 05:44:38 crc kubenswrapper[5050]: I0131 05:44:38.623927 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 31 05:44:38 crc kubenswrapper[5050]: I0131 05:44:38.909128 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.017578 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.017927 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.018196 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.019158 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a251b39bb9c1d28bca8640aed32573ece3622a90bd61ebf25455027ba42bf7e7"} 
pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.019444 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://a251b39bb9c1d28bca8640aed32573ece3622a90bd61ebf25455027ba42bf7e7" gracePeriod=600 Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.507284 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="a251b39bb9c1d28bca8640aed32573ece3622a90bd61ebf25455027ba42bf7e7" exitCode=0 Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.507336 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"a251b39bb9c1d28bca8640aed32573ece3622a90bd61ebf25455027ba42bf7e7"} Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.507793 5050 scope.go:117] "RemoveContainer" containerID="37867fe0b3a3a54da7bcbf64f0d3572ca6af3a27ac44fef3f2c635dee432f98f" Jan 31 05:44:39 crc kubenswrapper[5050]: I0131 05:44:39.995764 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.153477 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-tls\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.153866 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-plugins\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.153918 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk8ng\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-kube-api-access-hk8ng\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154012 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-confd\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154051 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-server-conf\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154101 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154140 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-pod-info\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154191 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-plugins-conf\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154227 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-erlang-cookie-secret\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154267 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-config-data\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.154299 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-erlang-cookie\") pod \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\" (UID: \"b3fa70dc-40c9-4b8a-8239-d785f140d5d2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.155402 
5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.157351 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.157417 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.175131 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-pod-info" (OuterVolumeSpecName: "pod-info") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.181833 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.197634 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.199520 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-kube-api-access-hk8ng" (OuterVolumeSpecName: "kube-api-access-hk8ng") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "kube-api-access-hk8ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.209110 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.244997 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-config-data" (OuterVolumeSpecName: "config-data") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258102 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258132 5050 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258142 5050 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258151 5050 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258160 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258170 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258178 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258185 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.258194 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk8ng\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-kube-api-access-hk8ng\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.333755 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.334469 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-server-conf" (OuterVolumeSpecName: "server-conf") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.360266 5050 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.360587 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.406392 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b3fa70dc-40c9-4b8a-8239-d785f140d5d2" (UID: "b3fa70dc-40c9-4b8a-8239-d785f140d5d2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.461603 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3fa70dc-40c9-4b8a-8239-d785f140d5d2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.517298 5050 generic.go:334] "Generic (PLEG): container finished" podID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerID="13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1" exitCode=0 Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.517360 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3fa70dc-40c9-4b8a-8239-d785f140d5d2","Type":"ContainerDied","Data":"13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1"} Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.517385 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b3fa70dc-40c9-4b8a-8239-d785f140d5d2","Type":"ContainerDied","Data":"65a8d4e33ab1ae577eb8a76e29128f4020b9ddc80d3efa7f44623d0edbc34290"} Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.517401 5050 scope.go:117] "RemoveContainer" containerID="13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.517489 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.528680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51"} Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.552144 5050 scope.go:117] "RemoveContainer" containerID="42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.568385 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.576754 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.598827 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:44:40 crc kubenswrapper[5050]: E0131 05:44:40.599199 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerName="rabbitmq" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.599218 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerName="rabbitmq" Jan 31 05:44:40 crc kubenswrapper[5050]: E0131 05:44:40.599235 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerName="setup-container" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.599240 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerName="setup-container" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.599401 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" containerName="rabbitmq" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.600199 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.603796 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.604180 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.603813 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.604477 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.605309 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ddl55" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.605473 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.606933 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.618462 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:44:40 crc 
kubenswrapper[5050]: I0131 05:44:40.664984 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665101 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665189 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665359 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665437 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665548 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665632 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665729 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbpkx\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-kube-api-access-qbpkx\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665810 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.665879 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.669710 5050 scope.go:117] "RemoveContainer" containerID="13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1" Jan 31 05:44:40 crc kubenswrapper[5050]: E0131 05:44:40.670190 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1\": container with ID starting with 13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1 not found: ID does not exist" containerID="13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.670217 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1"} err="failed to get container status \"13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1\": rpc error: code = NotFound desc = could not find container \"13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1\": container with ID starting with 13ce102ff556f422846584849abb49cb4d4010f25dcc26099b86c398846cf8a1 not found: ID does not exist" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.670253 5050 scope.go:117] "RemoveContainer" containerID="42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d" Jan 31 05:44:40 crc kubenswrapper[5050]: E0131 05:44:40.670505 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d\": container with ID starting with 
42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d not found: ID does not exist" containerID="42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.670525 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d"} err="failed to get container status \"42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d\": rpc error: code = NotFound desc = could not find container \"42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d\": container with ID starting with 42ffb73fdfb7465785c2d4a666d37d70e31b4cc380a7d5f5bf700de51d819c7d not found: ID does not exist" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767033 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767268 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767300 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767329 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767384 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767424 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767473 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbpkx\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-kube-api-access-qbpkx\") pod \"rabbitmq-server-0\" (UID: 
\"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767496 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.767513 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.771820 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.772794 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.772824 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.772843 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.772945 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.773576 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.774818 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.775649 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.772551 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.779386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.790237 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbpkx\" (UniqueName: \"kubernetes.io/projected/2ec9e71b-ac09-44f7-8e06-6b628508c7ad-kube-api-access-qbpkx\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.811678 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2ec9e71b-ac09-44f7-8e06-6b628508c7ad\") " pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.867219 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.931587 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.969996 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-plugins\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970042 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec33cd-ecd1-4244-abb0-c5a27441abd2-erlang-cookie-secret\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970078 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-erlang-cookie\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970172 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec33cd-ecd1-4244-abb0-c5a27441abd2-pod-info\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970190 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-tls\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970240 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-config-data\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970275 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-server-conf\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970321 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x67d\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-kube-api-access-6x67d\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970346 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970384 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-plugins-conf\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.970417 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-confd\") pod \"faec33cd-ecd1-4244-abb0-c5a27441abd2\" (UID: \"faec33cd-ecd1-4244-abb0-c5a27441abd2\") " Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 
05:44:40.977075 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.977315 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.980223 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-kube-api-access-6x67d" (OuterVolumeSpecName: "kube-api-access-6x67d") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "kube-api-access-6x67d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.981071 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.981365 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.984323 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.986030 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faec33cd-ecd1-4244-abb0-c5a27441abd2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:44:40 crc kubenswrapper[5050]: I0131 05:44:40.990118 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/faec33cd-ecd1-4244-abb0-c5a27441abd2-pod-info" (OuterVolumeSpecName: "pod-info") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.004385 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-config-data" (OuterVolumeSpecName: "config-data") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.033372 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-server-conf" (OuterVolumeSpecName: "server-conf") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075314 5050 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec33cd-ecd1-4244-abb0-c5a27441abd2-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075647 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075694 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075705 5050 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc 
kubenswrapper[5050]: I0131 05:44:41.075717 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x67d\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-kube-api-access-6x67d\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075779 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075791 5050 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faec33cd-ecd1-4244-abb0-c5a27441abd2-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075800 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075864 5050 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec33cd-ecd1-4244-abb0-c5a27441abd2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.075876 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.103121 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "faec33cd-ecd1-4244-abb0-c5a27441abd2" (UID: "faec33cd-ecd1-4244-abb0-c5a27441abd2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.113205 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.177553 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec33cd-ecd1-4244-abb0-c5a27441abd2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.177601 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.379614 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.559438 5050 generic.go:334] "Generic (PLEG): container finished" podID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerID="c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702" exitCode=0 Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.559565 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.559632 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"faec33cd-ecd1-4244-abb0-c5a27441abd2","Type":"ContainerDied","Data":"c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702"} Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.559792 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"faec33cd-ecd1-4244-abb0-c5a27441abd2","Type":"ContainerDied","Data":"c737df552da7e8b2db293133ae47c07759c7160574545979c12843ffbdef1eb2"} Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.559902 5050 scope.go:117] "RemoveContainer" containerID="c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.569757 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ec9e71b-ac09-44f7-8e06-6b628508c7ad","Type":"ContainerStarted","Data":"f0b21f648371c7771808a950cd98334bec8c00e026ec9297e95e2caca914aae3"} Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.579573 5050 scope.go:117] "RemoveContainer" containerID="908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.608275 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.615277 5050 scope.go:117] "RemoveContainer" containerID="c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702" Jan 31 05:44:41 crc kubenswrapper[5050]: E0131 05:44:41.616173 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702\": container with ID starting with 
c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702 not found: ID does not exist" containerID="c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.616228 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702"} err="failed to get container status \"c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702\": rpc error: code = NotFound desc = could not find container \"c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702\": container with ID starting with c91445d83b8f8ad5af7bbea5cbfb52a744fa2255bacbc0e7c864a133e1d2d702 not found: ID does not exist" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.616262 5050 scope.go:117] "RemoveContainer" containerID="908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73" Jan 31 05:44:41 crc kubenswrapper[5050]: E0131 05:44:41.616757 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73\": container with ID starting with 908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73 not found: ID does not exist" containerID="908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.616782 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73"} err="failed to get container status \"908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73\": rpc error: code = NotFound desc = could not find container \"908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73\": container with ID starting with 908370e323fbd20dcd8765438ac1ee820a6d0d5bbfe33c1d484ee9ff5821aa73 not found: ID does not 
exist" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.626740 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.684783 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:44:41 crc kubenswrapper[5050]: E0131 05:44:41.685497 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerName="setup-container" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.685514 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerName="setup-container" Jan 31 05:44:41 crc kubenswrapper[5050]: E0131 05:44:41.685525 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerName="rabbitmq" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.685532 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerName="rabbitmq" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.685707 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" containerName="rabbitmq" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.686600 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.689503 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.689632 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.689676 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.689836 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.689932 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.689974 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xnht7" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.690225 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.710548 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.748368 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3fa70dc-40c9-4b8a-8239-d785f140d5d2" path="/var/lib/kubelet/pods/b3fa70dc-40c9-4b8a-8239-d785f140d5d2/volumes" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.749270 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faec33cd-ecd1-4244-abb0-c5a27441abd2" path="/var/lib/kubelet/pods/faec33cd-ecd1-4244-abb0-c5a27441abd2/volumes" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 
05:44:41.802369 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.802421 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.802463 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8afb23c2-9926-4b29-b474-ba4f89f261aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.802492 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.802522 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 
05:44:41.802581 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.802748 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98wp\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-kube-api-access-m98wp\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.802856 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.803019 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.803041 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8afb23c2-9926-4b29-b474-ba4f89f261aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 
05:44:41.803256 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904566 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904625 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904678 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904728 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m98wp\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-kube-api-access-m98wp\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904786 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904833 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904851 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8afb23c2-9926-4b29-b474-ba4f89f261aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904894 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904929 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.904971 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-server-conf\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.905002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8afb23c2-9926-4b29-b474-ba4f89f261aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.906931 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.906931 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.906947 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.906947 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.907840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8afb23c2-9926-4b29-b474-ba4f89f261aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.908222 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.911916 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.912436 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8afb23c2-9926-4b29-b474-ba4f89f261aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.916781 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8afb23c2-9926-4b29-b474-ba4f89f261aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.922571 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.933668 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m98wp\" (UniqueName: \"kubernetes.io/projected/8afb23c2-9926-4b29-b474-ba4f89f261aa-kube-api-access-m98wp\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:41 crc kubenswrapper[5050]: I0131 05:44:41.939187 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8afb23c2-9926-4b29-b474-ba4f89f261aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:42 crc kubenswrapper[5050]: I0131 05:44:42.017175 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:44:42 crc kubenswrapper[5050]: I0131 05:44:42.271489 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:44:42 crc kubenswrapper[5050]: I0131 05:44:42.583345 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8afb23c2-9926-4b29-b474-ba4f89f261aa","Type":"ContainerStarted","Data":"9a380feebc2ba75a3090c3ef72fe8970d8d4f99cbcb3c193d296e88d88b0e82a"} Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.276531 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-24nw8"] Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.277843 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.287071 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.289300 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-24nw8"] Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.437834 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf7sz\" (UniqueName: \"kubernetes.io/projected/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-kube-api-access-kf7sz\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.437882 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.438057 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-config\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.438143 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " 
pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.438172 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.438204 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.540505 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-config\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.540638 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.540688 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " 
pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.540737 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.540871 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7sz\" (UniqueName: \"kubernetes.io/projected/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-kube-api-access-kf7sz\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.540911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.542232 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.542707 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-config\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: 
I0131 05:44:43.542744 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.542761 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.542818 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.563775 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7sz\" (UniqueName: \"kubernetes.io/projected/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-kube-api-access-kf7sz\") pod \"dnsmasq-dns-6447ccbd8f-24nw8\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.596253 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:43 crc kubenswrapper[5050]: I0131 05:44:43.613805 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ec9e71b-ac09-44f7-8e06-6b628508c7ad","Type":"ContainerStarted","Data":"26b09029fd3f51e4f48b788c3a7e2d620f8a97579b5aab48b53715e19d08d652"} Jan 31 05:44:44 crc kubenswrapper[5050]: I0131 05:44:44.066903 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-24nw8"] Jan 31 05:44:44 crc kubenswrapper[5050]: W0131 05:44:44.071882 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d2c7d93_6505_4ca5_aaca_23a9b07d64bc.slice/crio-14200786f2f1e74048fb9e154bbc8141b635a20a85faa6a8a23495f713240fdf WatchSource:0}: Error finding container 14200786f2f1e74048fb9e154bbc8141b635a20a85faa6a8a23495f713240fdf: Status 404 returned error can't find the container with id 14200786f2f1e74048fb9e154bbc8141b635a20a85faa6a8a23495f713240fdf Jan 31 05:44:44 crc kubenswrapper[5050]: I0131 05:44:44.642837 5050 generic.go:334] "Generic (PLEG): container finished" podID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerID="4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2" exitCode=0 Jan 31 05:44:44 crc kubenswrapper[5050]: I0131 05:44:44.642975 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" event={"ID":"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc","Type":"ContainerDied","Data":"4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2"} Jan 31 05:44:44 crc kubenswrapper[5050]: I0131 05:44:44.643590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" event={"ID":"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc","Type":"ContainerStarted","Data":"14200786f2f1e74048fb9e154bbc8141b635a20a85faa6a8a23495f713240fdf"} Jan 31 05:44:44 crc 
kubenswrapper[5050]: I0131 05:44:44.647675 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8afb23c2-9926-4b29-b474-ba4f89f261aa","Type":"ContainerStarted","Data":"0ae4d36afb9c340dbf4cf0a100ed3dde52d3d0b3af78561d6679f2d14ba495a7"} Jan 31 05:44:45 crc kubenswrapper[5050]: I0131 05:44:45.662897 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" event={"ID":"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc","Type":"ContainerStarted","Data":"717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9"} Jan 31 05:44:45 crc kubenswrapper[5050]: I0131 05:44:45.690388 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" podStartSLOduration=2.690362976 podStartE2EDuration="2.690362976s" podCreationTimestamp="2026-01-31 05:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:44:45.688791314 +0000 UTC m=+1410.737952940" watchObservedRunningTime="2026-01-31 05:44:45.690362976 +0000 UTC m=+1410.739524612" Jan 31 05:44:46 crc kubenswrapper[5050]: I0131 05:44:46.675767 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.598231 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.686551 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-lpqj8"] Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.686754 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" podUID="c744dd82-741e-4835-90e2-454ad9587ff0" containerName="dnsmasq-dns" 
containerID="cri-o://45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1" gracePeriod=10 Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.832077 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kv8pz"] Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.835916 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.845376 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kv8pz"] Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.941839 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.941942 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.942028 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.942061 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wjcd6\" (UniqueName: \"kubernetes.io/projected/eea77a53-6357-4243-b7bd-5b98e5f15146-kube-api-access-wjcd6\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.942106 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:53 crc kubenswrapper[5050]: I0131 05:44:53.942156 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-config\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.043392 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.043460 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.043508 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.043537 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjcd6\" (UniqueName: \"kubernetes.io/projected/eea77a53-6357-4243-b7bd-5b98e5f15146-kube-api-access-wjcd6\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.043583 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.043613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-config\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.044416 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-config\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.044976 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-dns-svc\") pod 
\"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.045461 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.045938 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.046815 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.070475 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjcd6\" (UniqueName: \"kubernetes.io/projected/eea77a53-6357-4243-b7bd-5b98e5f15146-kube-api-access-wjcd6\") pod \"dnsmasq-dns-864d5fc68c-kv8pz\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.169791 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.285155 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.349190 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-nb\") pod \"c744dd82-741e-4835-90e2-454ad9587ff0\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.349263 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6fwn\" (UniqueName: \"kubernetes.io/projected/c744dd82-741e-4835-90e2-454ad9587ff0-kube-api-access-w6fwn\") pod \"c744dd82-741e-4835-90e2-454ad9587ff0\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.349390 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-dns-svc\") pod \"c744dd82-741e-4835-90e2-454ad9587ff0\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.349420 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-sb\") pod \"c744dd82-741e-4835-90e2-454ad9587ff0\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.349480 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-config\") pod \"c744dd82-741e-4835-90e2-454ad9587ff0\" (UID: \"c744dd82-741e-4835-90e2-454ad9587ff0\") " Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.356239 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c744dd82-741e-4835-90e2-454ad9587ff0-kube-api-access-w6fwn" (OuterVolumeSpecName: "kube-api-access-w6fwn") pod "c744dd82-741e-4835-90e2-454ad9587ff0" (UID: "c744dd82-741e-4835-90e2-454ad9587ff0"). InnerVolumeSpecName "kube-api-access-w6fwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.401086 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c744dd82-741e-4835-90e2-454ad9587ff0" (UID: "c744dd82-741e-4835-90e2-454ad9587ff0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.403640 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c744dd82-741e-4835-90e2-454ad9587ff0" (UID: "c744dd82-741e-4835-90e2-454ad9587ff0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.409836 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c744dd82-741e-4835-90e2-454ad9587ff0" (UID: "c744dd82-741e-4835-90e2-454ad9587ff0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.423657 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-config" (OuterVolumeSpecName: "config") pod "c744dd82-741e-4835-90e2-454ad9587ff0" (UID: "c744dd82-741e-4835-90e2-454ad9587ff0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.451263 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.451319 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.451332 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.451340 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c744dd82-741e-4835-90e2-454ad9587ff0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.451350 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6fwn\" (UniqueName: \"kubernetes.io/projected/c744dd82-741e-4835-90e2-454ad9587ff0-kube-api-access-w6fwn\") on node \"crc\" DevicePath \"\"" Jan 31 05:44:54 crc kubenswrapper[5050]: W0131 05:44:54.649699 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeea77a53_6357_4243_b7bd_5b98e5f15146.slice/crio-ea402798c33076e2d2b1dacf20bed67d766a3cce4c90254dfd401e373a3b46a4 WatchSource:0}: Error finding container ea402798c33076e2d2b1dacf20bed67d766a3cce4c90254dfd401e373a3b46a4: Status 404 returned error can't find the container with id ea402798c33076e2d2b1dacf20bed67d766a3cce4c90254dfd401e373a3b46a4 Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.655233 5050 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kv8pz"] Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.774732 5050 generic.go:334] "Generic (PLEG): container finished" podID="c744dd82-741e-4835-90e2-454ad9587ff0" containerID="45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1" exitCode=0 Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.774793 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" event={"ID":"c744dd82-741e-4835-90e2-454ad9587ff0","Type":"ContainerDied","Data":"45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1"} Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.774852 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" event={"ID":"c744dd82-741e-4835-90e2-454ad9587ff0","Type":"ContainerDied","Data":"3440c42d36482494ee5b2963120775f84e70dbbc8aac349b7a646366505d87ca"} Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.774870 5050 scope.go:117] "RemoveContainer" containerID="45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.774871 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.776709 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" event={"ID":"eea77a53-6357-4243-b7bd-5b98e5f15146","Type":"ContainerStarted","Data":"ea402798c33076e2d2b1dacf20bed67d766a3cce4c90254dfd401e373a3b46a4"} Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.906747 5050 scope.go:117] "RemoveContainer" containerID="fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.922689 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-lpqj8"] Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.931437 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-lpqj8"] Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.955319 5050 scope.go:117] "RemoveContainer" containerID="45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1" Jan 31 05:44:54 crc kubenswrapper[5050]: E0131 05:44:54.955831 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1\": container with ID starting with 45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1 not found: ID does not exist" containerID="45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.955865 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1"} err="failed to get container status \"45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1\": rpc error: code = NotFound desc = could not find container 
\"45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1\": container with ID starting with 45e4c3c477d6dc6393b51b0f4a8136fad3915ccbc98de1b775fbc1f0099b1ee1 not found: ID does not exist" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.955897 5050 scope.go:117] "RemoveContainer" containerID="fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450" Jan 31 05:44:54 crc kubenswrapper[5050]: E0131 05:44:54.956188 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450\": container with ID starting with fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450 not found: ID does not exist" containerID="fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450" Jan 31 05:44:54 crc kubenswrapper[5050]: I0131 05:44:54.956211 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450"} err="failed to get container status \"fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450\": rpc error: code = NotFound desc = could not find container \"fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450\": container with ID starting with fa9c86064586361e8eedbe4c5082a235c09cf8c7ff7b43285dabca8ddd8e2450 not found: ID does not exist" Jan 31 05:44:55 crc kubenswrapper[5050]: I0131 05:44:55.773808 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c744dd82-741e-4835-90e2-454ad9587ff0" path="/var/lib/kubelet/pods/c744dd82-741e-4835-90e2-454ad9587ff0/volumes" Jan 31 05:44:55 crc kubenswrapper[5050]: I0131 05:44:55.799703 5050 generic.go:334] "Generic (PLEG): container finished" podID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerID="e331e53cc62a2a2522be9c2b05c0fd6c86695366830e1ae9c3a694a8f56d96d3" exitCode=0 Jan 31 05:44:55 crc kubenswrapper[5050]: I0131 
05:44:55.799749 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" event={"ID":"eea77a53-6357-4243-b7bd-5b98e5f15146","Type":"ContainerDied","Data":"e331e53cc62a2a2522be9c2b05c0fd6c86695366830e1ae9c3a694a8f56d96d3"} Jan 31 05:44:56 crc kubenswrapper[5050]: I0131 05:44:56.812159 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" event={"ID":"eea77a53-6357-4243-b7bd-5b98e5f15146","Type":"ContainerStarted","Data":"06e3203d9a7111e19cc03e440f770c3624830f77dea863981243d5ba822d346c"} Jan 31 05:44:56 crc kubenswrapper[5050]: I0131 05:44:56.812406 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:44:59 crc kubenswrapper[5050]: I0131 05:44:59.147440 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b856c5697-lpqj8" podUID="c744dd82-741e-4835-90e2-454ad9587ff0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: i/o timeout" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.168090 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" podStartSLOduration=7.168065503 podStartE2EDuration="7.168065503s" podCreationTimestamp="2026-01-31 05:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:44:56.845016522 +0000 UTC m=+1421.894178168" watchObservedRunningTime="2026-01-31 05:45:00.168065503 +0000 UTC m=+1425.217227119" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.174764 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb"] Jan 31 05:45:00 crc kubenswrapper[5050]: E0131 05:45:00.175398 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c744dd82-741e-4835-90e2-454ad9587ff0" containerName="dnsmasq-dns" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.175429 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c744dd82-741e-4835-90e2-454ad9587ff0" containerName="dnsmasq-dns" Jan 31 05:45:00 crc kubenswrapper[5050]: E0131 05:45:00.175468 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c744dd82-741e-4835-90e2-454ad9587ff0" containerName="init" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.175481 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c744dd82-741e-4835-90e2-454ad9587ff0" containerName="init" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.175873 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c744dd82-741e-4835-90e2-454ad9587ff0" containerName="dnsmasq-dns" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.177022 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.180478 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.180608 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.192337 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb"] Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.261047 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6091ffec-2f5f-4709-9984-69e94489c3b7-config-volume\") pod \"collect-profiles-29497305-46hvb\" (UID: 
\"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.261129 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffssp\" (UniqueName: \"kubernetes.io/projected/6091ffec-2f5f-4709-9984-69e94489c3b7-kube-api-access-ffssp\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.261229 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6091ffec-2f5f-4709-9984-69e94489c3b7-secret-volume\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.363989 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffssp\" (UniqueName: \"kubernetes.io/projected/6091ffec-2f5f-4709-9984-69e94489c3b7-kube-api-access-ffssp\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.364026 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6091ffec-2f5f-4709-9984-69e94489c3b7-secret-volume\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.364130 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6091ffec-2f5f-4709-9984-69e94489c3b7-config-volume\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.364942 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6091ffec-2f5f-4709-9984-69e94489c3b7-config-volume\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.382446 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6091ffec-2f5f-4709-9984-69e94489c3b7-secret-volume\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.383020 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffssp\" (UniqueName: \"kubernetes.io/projected/6091ffec-2f5f-4709-9984-69e94489c3b7-kube-api-access-ffssp\") pod \"collect-profiles-29497305-46hvb\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:00 crc kubenswrapper[5050]: I0131 05:45:00.504415 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:01 crc kubenswrapper[5050]: I0131 05:45:01.025354 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb"] Jan 31 05:45:01 crc kubenswrapper[5050]: W0131 05:45:01.034520 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6091ffec_2f5f_4709_9984_69e94489c3b7.slice/crio-c2dae688d61a8049a8f284e030546e39fb854829ee1582fa769acd874d1647cb WatchSource:0}: Error finding container c2dae688d61a8049a8f284e030546e39fb854829ee1582fa769acd874d1647cb: Status 404 returned error can't find the container with id c2dae688d61a8049a8f284e030546e39fb854829ee1582fa769acd874d1647cb Jan 31 05:45:01 crc kubenswrapper[5050]: I0131 05:45:01.868707 5050 generic.go:334] "Generic (PLEG): container finished" podID="6091ffec-2f5f-4709-9984-69e94489c3b7" containerID="fa359172c8f745b63d36754c025350a62f3a52f99c271a6b8857ae5260865b7f" exitCode=0 Jan 31 05:45:01 crc kubenswrapper[5050]: I0131 05:45:01.868775 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" event={"ID":"6091ffec-2f5f-4709-9984-69e94489c3b7","Type":"ContainerDied","Data":"fa359172c8f745b63d36754c025350a62f3a52f99c271a6b8857ae5260865b7f"} Jan 31 05:45:01 crc kubenswrapper[5050]: I0131 05:45:01.869195 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" event={"ID":"6091ffec-2f5f-4709-9984-69e94489c3b7","Type":"ContainerStarted","Data":"c2dae688d61a8049a8f284e030546e39fb854829ee1582fa769acd874d1647cb"} Jan 31 05:45:03 crc kubenswrapper[5050]: I0131 05:45:03.319512 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:03 crc kubenswrapper[5050]: I0131 05:45:03.426619 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6091ffec-2f5f-4709-9984-69e94489c3b7-secret-volume\") pod \"6091ffec-2f5f-4709-9984-69e94489c3b7\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " Jan 31 05:45:03 crc kubenswrapper[5050]: I0131 05:45:03.426756 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffssp\" (UniqueName: \"kubernetes.io/projected/6091ffec-2f5f-4709-9984-69e94489c3b7-kube-api-access-ffssp\") pod \"6091ffec-2f5f-4709-9984-69e94489c3b7\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " Jan 31 05:45:03 crc kubenswrapper[5050]: I0131 05:45:03.426992 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6091ffec-2f5f-4709-9984-69e94489c3b7-config-volume\") pod \"6091ffec-2f5f-4709-9984-69e94489c3b7\" (UID: \"6091ffec-2f5f-4709-9984-69e94489c3b7\") " Jan 31 05:45:03 crc kubenswrapper[5050]: I0131 05:45:03.427837 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6091ffec-2f5f-4709-9984-69e94489c3b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "6091ffec-2f5f-4709-9984-69e94489c3b7" (UID: "6091ffec-2f5f-4709-9984-69e94489c3b7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:45:03 crc kubenswrapper[5050]: I0131 05:45:03.530261 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6091ffec-2f5f-4709-9984-69e94489c3b7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.051156 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6091ffec-2f5f-4709-9984-69e94489c3b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6091ffec-2f5f-4709-9984-69e94489c3b7" (UID: "6091ffec-2f5f-4709-9984-69e94489c3b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.051263 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6091ffec-2f5f-4709-9984-69e94489c3b7-kube-api-access-ffssp" (OuterVolumeSpecName: "kube-api-access-ffssp") pod "6091ffec-2f5f-4709-9984-69e94489c3b7" (UID: "6091ffec-2f5f-4709-9984-69e94489c3b7"). InnerVolumeSpecName "kube-api-access-ffssp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.064641 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" event={"ID":"6091ffec-2f5f-4709-9984-69e94489c3b7","Type":"ContainerDied","Data":"c2dae688d61a8049a8f284e030546e39fb854829ee1582fa769acd874d1647cb"} Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.064680 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2dae688d61a8049a8f284e030546e39fb854829ee1582fa769acd874d1647cb" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.064735 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.140725 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6091ffec-2f5f-4709-9984-69e94489c3b7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.141045 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffssp\" (UniqueName: \"kubernetes.io/projected/6091ffec-2f5f-4709-9984-69e94489c3b7-kube-api-access-ffssp\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.171771 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.235843 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-24nw8"] Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.236153 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" podUID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerName="dnsmasq-dns" containerID="cri-o://717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9" gracePeriod=10 Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.786897 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.858591 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-openstack-edpm-ipam\") pod \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.858746 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-sb\") pod \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.858794 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-config\") pod \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.858869 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-nb\") pod \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.858899 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf7sz\" (UniqueName: \"kubernetes.io/projected/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-kube-api-access-kf7sz\") pod \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.858981 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-dns-svc\") pod \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\" (UID: \"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc\") " Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.868197 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-kube-api-access-kf7sz" (OuterVolumeSpecName: "kube-api-access-kf7sz") pod "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" (UID: "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc"). InnerVolumeSpecName "kube-api-access-kf7sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.909126 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" (UID: "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.909758 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" (UID: "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.916056 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-config" (OuterVolumeSpecName: "config") pod "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" (UID: "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.928750 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" (UID: "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.932687 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" (UID: "3d2c7d93-6505-4ca5-aaca-23a9b07d64bc"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.960889 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.960939 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf7sz\" (UniqueName: \"kubernetes.io/projected/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-kube-api-access-kf7sz\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.960990 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.961007 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.961025 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[5050]: I0131 05:45:04.961042 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.075017 5050 generic.go:334] "Generic (PLEG): container finished" podID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerID="717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9" exitCode=0 Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.075056 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" event={"ID":"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc","Type":"ContainerDied","Data":"717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9"} Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.075078 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" event={"ID":"3d2c7d93-6505-4ca5-aaca-23a9b07d64bc","Type":"ContainerDied","Data":"14200786f2f1e74048fb9e154bbc8141b635a20a85faa6a8a23495f713240fdf"} Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.075095 5050 scope.go:117] "RemoveContainer" containerID="717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.075204 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-24nw8" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.110410 5050 scope.go:117] "RemoveContainer" containerID="4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.131858 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-24nw8"] Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.143584 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-24nw8"] Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.153219 5050 scope.go:117] "RemoveContainer" containerID="717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9" Jan 31 05:45:05 crc kubenswrapper[5050]: E0131 05:45:05.153660 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9\": container with ID starting with 717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9 not found: ID does not exist" containerID="717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.153717 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9"} err="failed to get container status \"717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9\": rpc error: code = NotFound desc = could not find container \"717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9\": container with ID starting with 717c0622ebb093b0a1e7a448e0df600d35d109e398ec11a94bc2f220680846b9 not found: ID does not exist" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.153753 5050 scope.go:117] "RemoveContainer" containerID="4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2" Jan 31 
05:45:05 crc kubenswrapper[5050]: E0131 05:45:05.154373 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2\": container with ID starting with 4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2 not found: ID does not exist" containerID="4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.154558 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2"} err="failed to get container status \"4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2\": rpc error: code = NotFound desc = could not find container \"4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2\": container with ID starting with 4fee055f4375d15f0c3c10ec0b86130e8f9ebcd60601127615041e71ea0e66b2 not found: ID does not exist" Jan 31 05:45:05 crc kubenswrapper[5050]: I0131 05:45:05.747402 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" path="/var/lib/kubelet/pods/3d2c7d93-6505-4ca5-aaca-23a9b07d64bc/volumes" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.384031 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58"] Jan 31 05:45:14 crc kubenswrapper[5050]: E0131 05:45:14.389681 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerName="dnsmasq-dns" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.389705 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerName="dnsmasq-dns" Jan 31 05:45:14 crc kubenswrapper[5050]: E0131 05:45:14.389720 5050 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerName="init" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.389728 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerName="init" Jan 31 05:45:14 crc kubenswrapper[5050]: E0131 05:45:14.389765 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6091ffec-2f5f-4709-9984-69e94489c3b7" containerName="collect-profiles" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.389773 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6091ffec-2f5f-4709-9984-69e94489c3b7" containerName="collect-profiles" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.389981 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2c7d93-6505-4ca5-aaca-23a9b07d64bc" containerName="dnsmasq-dns" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.390031 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6091ffec-2f5f-4709-9984-69e94489c3b7" containerName="collect-profiles" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.390737 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.393359 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.393726 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.394010 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.399150 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.409485 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58"] Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.462230 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.462310 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc 
kubenswrapper[5050]: I0131 05:45:14.462354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.462438 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwfk\" (UniqueName: \"kubernetes.io/projected/963e7005-964e-4472-9a34-0407ee972f9f-kube-api-access-qhwfk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.564383 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwfk\" (UniqueName: \"kubernetes.io/projected/963e7005-964e-4472-9a34-0407ee972f9f-kube-api-access-qhwfk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.564881 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.565873 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.565931 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.571539 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.573116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.583886 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.591305 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwfk\" (UniqueName: \"kubernetes.io/projected/963e7005-964e-4472-9a34-0407ee972f9f-kube-api-access-qhwfk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:14 crc kubenswrapper[5050]: I0131 05:45:14.723423 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:15 crc kubenswrapper[5050]: W0131 05:45:15.280545 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod963e7005_964e_4472_9a34_0407ee972f9f.slice/crio-62a46faa2b91bd80e901505826079cce0d9397cbfb53cc1fd60ce3592b19405e WatchSource:0}: Error finding container 62a46faa2b91bd80e901505826079cce0d9397cbfb53cc1fd60ce3592b19405e: Status 404 returned error can't find the container with id 62a46faa2b91bd80e901505826079cce0d9397cbfb53cc1fd60ce3592b19405e Jan 31 05:45:15 crc kubenswrapper[5050]: I0131 05:45:15.281273 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58"] Jan 31 05:45:15 crc kubenswrapper[5050]: I0131 05:45:15.284165 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:45:16 crc kubenswrapper[5050]: I0131 05:45:16.201266 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ec9e71b-ac09-44f7-8e06-6b628508c7ad" containerID="26b09029fd3f51e4f48b788c3a7e2d620f8a97579b5aab48b53715e19d08d652" exitCode=0 Jan 31 05:45:16 crc kubenswrapper[5050]: I0131 05:45:16.201428 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"2ec9e71b-ac09-44f7-8e06-6b628508c7ad","Type":"ContainerDied","Data":"26b09029fd3f51e4f48b788c3a7e2d620f8a97579b5aab48b53715e19d08d652"} Jan 31 05:45:16 crc kubenswrapper[5050]: I0131 05:45:16.205104 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" event={"ID":"963e7005-964e-4472-9a34-0407ee972f9f","Type":"ContainerStarted","Data":"62a46faa2b91bd80e901505826079cce0d9397cbfb53cc1fd60ce3592b19405e"} Jan 31 05:45:17 crc kubenswrapper[5050]: I0131 05:45:17.231713 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ec9e71b-ac09-44f7-8e06-6b628508c7ad","Type":"ContainerStarted","Data":"f77c0417dde264c6ba1076a69f9c46a849ce03a7b3186329d009a8d21f2453b2"} Jan 31 05:45:17 crc kubenswrapper[5050]: I0131 05:45:17.233707 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 05:45:17 crc kubenswrapper[5050]: I0131 05:45:17.237482 5050 generic.go:334] "Generic (PLEG): container finished" podID="8afb23c2-9926-4b29-b474-ba4f89f261aa" containerID="0ae4d36afb9c340dbf4cf0a100ed3dde52d3d0b3af78561d6679f2d14ba495a7" exitCode=0 Jan 31 05:45:17 crc kubenswrapper[5050]: I0131 05:45:17.237546 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8afb23c2-9926-4b29-b474-ba4f89f261aa","Type":"ContainerDied","Data":"0ae4d36afb9c340dbf4cf0a100ed3dde52d3d0b3af78561d6679f2d14ba495a7"} Jan 31 05:45:17 crc kubenswrapper[5050]: I0131 05:45:17.262666 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.262641443 podStartE2EDuration="37.262641443s" podCreationTimestamp="2026-01-31 05:44:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
05:45:17.260037273 +0000 UTC m=+1442.309198869" watchObservedRunningTime="2026-01-31 05:45:17.262641443 +0000 UTC m=+1442.311803039" Jan 31 05:45:18 crc kubenswrapper[5050]: I0131 05:45:18.257490 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8afb23c2-9926-4b29-b474-ba4f89f261aa","Type":"ContainerStarted","Data":"54a02197c23bab5b31507060491dd9aa2155032666123c9def591e85f65c13d8"} Jan 31 05:45:18 crc kubenswrapper[5050]: I0131 05:45:18.258763 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:45:18 crc kubenswrapper[5050]: I0131 05:45:18.297738 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.297711369 podStartE2EDuration="37.297711369s" podCreationTimestamp="2026-01-31 05:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:45:18.279574546 +0000 UTC m=+1443.328736132" watchObservedRunningTime="2026-01-31 05:45:18.297711369 +0000 UTC m=+1443.346872985" Jan 31 05:45:22 crc kubenswrapper[5050]: I0131 05:45:22.708315 5050 scope.go:117] "RemoveContainer" containerID="196feacae87f155f9194935ad93031f3dc66d064da77e65a5f9c4293ace3b7af" Jan 31 05:45:25 crc kubenswrapper[5050]: I0131 05:45:25.937350 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:45:26 crc kubenswrapper[5050]: I0131 05:45:26.336256 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" event={"ID":"963e7005-964e-4472-9a34-0407ee972f9f","Type":"ContainerStarted","Data":"ab86aa23eb5ea5decf4d2c346996e50ef008f5edd7cc55bfd2d0f878f69c482b"} Jan 31 05:45:26 crc kubenswrapper[5050]: I0131 05:45:26.366403 5050 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" podStartSLOduration=1.715488387 podStartE2EDuration="12.36637678s" podCreationTimestamp="2026-01-31 05:45:14 +0000 UTC" firstStartedPulling="2026-01-31 05:45:15.283801882 +0000 UTC m=+1440.332963498" lastFinishedPulling="2026-01-31 05:45:25.934690285 +0000 UTC m=+1450.983851891" observedRunningTime="2026-01-31 05:45:26.359544784 +0000 UTC m=+1451.408706410" watchObservedRunningTime="2026-01-31 05:45:26.36637678 +0000 UTC m=+1451.415538396" Jan 31 05:45:30 crc kubenswrapper[5050]: I0131 05:45:30.935201 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 05:45:32 crc kubenswrapper[5050]: I0131 05:45:32.021180 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:45:37 crc kubenswrapper[5050]: I0131 05:45:37.445019 5050 generic.go:334] "Generic (PLEG): container finished" podID="963e7005-964e-4472-9a34-0407ee972f9f" containerID="ab86aa23eb5ea5decf4d2c346996e50ef008f5edd7cc55bfd2d0f878f69c482b" exitCode=0 Jan 31 05:45:37 crc kubenswrapper[5050]: I0131 05:45:37.445111 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" event={"ID":"963e7005-964e-4472-9a34-0407ee972f9f","Type":"ContainerDied","Data":"ab86aa23eb5ea5decf4d2c346996e50ef008f5edd7cc55bfd2d0f878f69c482b"} Jan 31 05:45:38 crc kubenswrapper[5050]: I0131 05:45:38.939895 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.053166 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-ssh-key-openstack-edpm-ipam\") pod \"963e7005-964e-4472-9a34-0407ee972f9f\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.053231 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-inventory\") pod \"963e7005-964e-4472-9a34-0407ee972f9f\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.053366 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhwfk\" (UniqueName: \"kubernetes.io/projected/963e7005-964e-4472-9a34-0407ee972f9f-kube-api-access-qhwfk\") pod \"963e7005-964e-4472-9a34-0407ee972f9f\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.053419 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-repo-setup-combined-ca-bundle\") pod \"963e7005-964e-4472-9a34-0407ee972f9f\" (UID: \"963e7005-964e-4472-9a34-0407ee972f9f\") " Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.061134 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963e7005-964e-4472-9a34-0407ee972f9f-kube-api-access-qhwfk" (OuterVolumeSpecName: "kube-api-access-qhwfk") pod "963e7005-964e-4472-9a34-0407ee972f9f" (UID: "963e7005-964e-4472-9a34-0407ee972f9f"). InnerVolumeSpecName "kube-api-access-qhwfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.064107 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "963e7005-964e-4472-9a34-0407ee972f9f" (UID: "963e7005-964e-4472-9a34-0407ee972f9f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.086676 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-inventory" (OuterVolumeSpecName: "inventory") pod "963e7005-964e-4472-9a34-0407ee972f9f" (UID: "963e7005-964e-4472-9a34-0407ee972f9f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.097745 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "963e7005-964e-4472-9a34-0407ee972f9f" (UID: "963e7005-964e-4472-9a34-0407ee972f9f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.154910 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhwfk\" (UniqueName: \"kubernetes.io/projected/963e7005-964e-4472-9a34-0407ee972f9f-kube-api-access-qhwfk\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.154992 5050 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.155004 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.155014 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/963e7005-964e-4472-9a34-0407ee972f9f-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.466260 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" event={"ID":"963e7005-964e-4472-9a34-0407ee972f9f","Type":"ContainerDied","Data":"62a46faa2b91bd80e901505826079cce0d9397cbfb53cc1fd60ce3592b19405e"} Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.466312 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a46faa2b91bd80e901505826079cce0d9397cbfb53cc1fd60ce3592b19405e" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.466392 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.543999 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8"] Jan 31 05:45:39 crc kubenswrapper[5050]: E0131 05:45:39.544566 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963e7005-964e-4472-9a34-0407ee972f9f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.544584 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="963e7005-964e-4472-9a34-0407ee972f9f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.544788 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="963e7005-964e-4472-9a34-0407ee972f9f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.545326 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.549189 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.549439 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.550274 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.550323 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.554724 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8"] Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.663079 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.663134 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2stbj\" (UniqueName: \"kubernetes.io/projected/e8d56aec-90df-4428-a321-97fcf90ff7f6-kube-api-access-2stbj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 
05:45:39.663438 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.663504 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.766214 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.766291 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.766380 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.766411 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2stbj\" (UniqueName: \"kubernetes.io/projected/e8d56aec-90df-4428-a321-97fcf90ff7f6-kube-api-access-2stbj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.771526 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.773091 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.780507 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.794815 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2stbj\" (UniqueName: \"kubernetes.io/projected/e8d56aec-90df-4428-a321-97fcf90ff7f6-kube-api-access-2stbj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:39 crc kubenswrapper[5050]: I0131 05:45:39.862164 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:45:40 crc kubenswrapper[5050]: I0131 05:45:40.441595 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8"] Jan 31 05:45:40 crc kubenswrapper[5050]: I0131 05:45:40.481168 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" event={"ID":"e8d56aec-90df-4428-a321-97fcf90ff7f6","Type":"ContainerStarted","Data":"49d618745009dd84d582ab9ac31474be8b4d30732cfeda0be814e3fa27555ea1"} Jan 31 05:45:41 crc kubenswrapper[5050]: I0131 05:45:41.495579 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" event={"ID":"e8d56aec-90df-4428-a321-97fcf90ff7f6","Type":"ContainerStarted","Data":"5309ef94180656087b9747dde22bac2730c648771bb11bacbe3c5645bf77c34a"} Jan 31 05:45:41 crc kubenswrapper[5050]: I0131 05:45:41.520773 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" podStartSLOduration=2.103318252 podStartE2EDuration="2.520749979s" podCreationTimestamp="2026-01-31 05:45:39 +0000 UTC" firstStartedPulling="2026-01-31 05:45:40.448297917 +0000 UTC m=+1465.497459513" 
lastFinishedPulling="2026-01-31 05:45:40.865729644 +0000 UTC m=+1465.914891240" observedRunningTime="2026-01-31 05:45:41.517586254 +0000 UTC m=+1466.566747840" watchObservedRunningTime="2026-01-31 05:45:41.520749979 +0000 UTC m=+1466.569911575" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.050432 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7xh65"] Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.058180 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.083661 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7xh65"] Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.157989 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbctn\" (UniqueName: \"kubernetes.io/projected/b12a28d7-a615-469c-b7c0-072a4128f8f1-kube-api-access-cbctn\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.158083 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-utilities\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.158135 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-catalog-content\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " 
pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.259920 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-utilities\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.260311 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-catalog-content\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.260492 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-utilities\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.260814 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-catalog-content\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.261320 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbctn\" (UniqueName: \"kubernetes.io/projected/b12a28d7-a615-469c-b7c0-072a4128f8f1-kube-api-access-cbctn\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " 
pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.285720 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbctn\" (UniqueName: \"kubernetes.io/projected/b12a28d7-a615-469c-b7c0-072a4128f8f1-kube-api-access-cbctn\") pod \"community-operators-7xh65\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.392805 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:08 crc kubenswrapper[5050]: I0131 05:46:08.890320 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7xh65"] Jan 31 05:46:09 crc kubenswrapper[5050]: I0131 05:46:09.822305 5050 generic.go:334] "Generic (PLEG): container finished" podID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerID="5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585" exitCode=0 Jan 31 05:46:09 crc kubenswrapper[5050]: I0131 05:46:09.822474 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xh65" event={"ID":"b12a28d7-a615-469c-b7c0-072a4128f8f1","Type":"ContainerDied","Data":"5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585"} Jan 31 05:46:09 crc kubenswrapper[5050]: I0131 05:46:09.822589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xh65" event={"ID":"b12a28d7-a615-469c-b7c0-072a4128f8f1","Type":"ContainerStarted","Data":"1538e0905a442439d041b7d6c0d03d0429f84ab60b287efee3e62e12b858f315"} Jan 31 05:46:11 crc kubenswrapper[5050]: I0131 05:46:11.853915 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xh65" 
event={"ID":"b12a28d7-a615-469c-b7c0-072a4128f8f1","Type":"ContainerStarted","Data":"a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae"} Jan 31 05:46:12 crc kubenswrapper[5050]: I0131 05:46:12.870118 5050 generic.go:334] "Generic (PLEG): container finished" podID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerID="a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae" exitCode=0 Jan 31 05:46:12 crc kubenswrapper[5050]: I0131 05:46:12.870222 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xh65" event={"ID":"b12a28d7-a615-469c-b7c0-072a4128f8f1","Type":"ContainerDied","Data":"a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae"} Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.426285 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5b9d"] Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.430659 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.445566 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5b9d"] Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.506855 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxkhr\" (UniqueName: \"kubernetes.io/projected/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-kube-api-access-rxkhr\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.506947 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-catalog-content\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.507243 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-utilities\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.609601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxkhr\" (UniqueName: \"kubernetes.io/projected/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-kube-api-access-rxkhr\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.609715 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-catalog-content\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.609813 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-utilities\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.611656 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-catalog-content\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.611720 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-utilities\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.636154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxkhr\" (UniqueName: \"kubernetes.io/projected/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-kube-api-access-rxkhr\") pod \"certified-operators-s5b9d\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:14 crc kubenswrapper[5050]: I0131 05:46:14.759245 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:15 crc kubenswrapper[5050]: I0131 05:46:15.259257 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5b9d"] Jan 31 05:46:15 crc kubenswrapper[5050]: W0131 05:46:15.268565 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeb28a8f_2f7b_4627_b94b_9fccb0f1a035.slice/crio-e0a91bed192940cba558102e17e48e392e1e725987359d1a0cdd7296ce930da9 WatchSource:0}: Error finding container e0a91bed192940cba558102e17e48e392e1e725987359d1a0cdd7296ce930da9: Status 404 returned error can't find the container with id e0a91bed192940cba558102e17e48e392e1e725987359d1a0cdd7296ce930da9 Jan 31 05:46:15 crc kubenswrapper[5050]: I0131 05:46:15.919510 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5b9d" event={"ID":"beb28a8f-2f7b-4627-b94b-9fccb0f1a035","Type":"ContainerStarted","Data":"ef227f791ec628da58eb838457ad29f30b3f0a8626d036fcc2a89375f3421898"} Jan 31 05:46:15 crc kubenswrapper[5050]: I0131 05:46:15.919999 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5b9d" event={"ID":"beb28a8f-2f7b-4627-b94b-9fccb0f1a035","Type":"ContainerStarted","Data":"e0a91bed192940cba558102e17e48e392e1e725987359d1a0cdd7296ce930da9"} Jan 31 05:46:16 crc kubenswrapper[5050]: I0131 05:46:16.931589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xh65" event={"ID":"b12a28d7-a615-469c-b7c0-072a4128f8f1","Type":"ContainerStarted","Data":"6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a"} Jan 31 05:46:16 crc kubenswrapper[5050]: I0131 05:46:16.933842 5050 generic.go:334] "Generic (PLEG): container finished" podID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" 
containerID="ef227f791ec628da58eb838457ad29f30b3f0a8626d036fcc2a89375f3421898" exitCode=0 Jan 31 05:46:16 crc kubenswrapper[5050]: I0131 05:46:16.933890 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5b9d" event={"ID":"beb28a8f-2f7b-4627-b94b-9fccb0f1a035","Type":"ContainerDied","Data":"ef227f791ec628da58eb838457ad29f30b3f0a8626d036fcc2a89375f3421898"} Jan 31 05:46:16 crc kubenswrapper[5050]: I0131 05:46:16.952576 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7xh65" podStartSLOduration=2.533197289 podStartE2EDuration="8.952557356s" podCreationTimestamp="2026-01-31 05:46:08 +0000 UTC" firstStartedPulling="2026-01-31 05:46:09.825024822 +0000 UTC m=+1494.874186428" lastFinishedPulling="2026-01-31 05:46:16.244384899 +0000 UTC m=+1501.293546495" observedRunningTime="2026-01-31 05:46:16.948110585 +0000 UTC m=+1501.997272231" watchObservedRunningTime="2026-01-31 05:46:16.952557356 +0000 UTC m=+1502.001718962" Jan 31 05:46:18 crc kubenswrapper[5050]: I0131 05:46:18.393573 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:18 crc kubenswrapper[5050]: I0131 05:46:18.394037 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:19 crc kubenswrapper[5050]: I0131 05:46:19.466199 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7xh65" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="registry-server" probeResult="failure" output=< Jan 31 05:46:19 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 05:46:19 crc kubenswrapper[5050]: > Jan 31 05:46:19 crc kubenswrapper[5050]: I0131 05:46:19.970312 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerID="3a25c68e27eedefdb7e08d0d76e9215fd83d575585bd941fd270f40cbdb6d599" exitCode=0 Jan 31 05:46:19 crc kubenswrapper[5050]: I0131 05:46:19.970373 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5b9d" event={"ID":"beb28a8f-2f7b-4627-b94b-9fccb0f1a035","Type":"ContainerDied","Data":"3a25c68e27eedefdb7e08d0d76e9215fd83d575585bd941fd270f40cbdb6d599"} Jan 31 05:46:24 crc kubenswrapper[5050]: I0131 05:46:24.018044 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5b9d" event={"ID":"beb28a8f-2f7b-4627-b94b-9fccb0f1a035","Type":"ContainerStarted","Data":"be6c91a02a20db3f7f9b32f718760bf5eeb8b65743d3510012c1b8f9a210951b"} Jan 31 05:46:24 crc kubenswrapper[5050]: I0131 05:46:24.040716 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5b9d" podStartSLOduration=3.8024009210000003 podStartE2EDuration="10.040693303s" podCreationTimestamp="2026-01-31 05:46:14 +0000 UTC" firstStartedPulling="2026-01-31 05:46:16.936077696 +0000 UTC m=+1501.985239302" lastFinishedPulling="2026-01-31 05:46:23.174370088 +0000 UTC m=+1508.223531684" observedRunningTime="2026-01-31 05:46:24.03727219 +0000 UTC m=+1509.086433826" watchObservedRunningTime="2026-01-31 05:46:24.040693303 +0000 UTC m=+1509.089854939" Jan 31 05:46:24 crc kubenswrapper[5050]: I0131 05:46:24.759960 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:24 crc kubenswrapper[5050]: I0131 05:46:24.760009 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:25 crc kubenswrapper[5050]: I0131 05:46:25.820078 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s5b9d" 
podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="registry-server" probeResult="failure" output=< Jan 31 05:46:25 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 05:46:25 crc kubenswrapper[5050]: > Jan 31 05:46:25 crc kubenswrapper[5050]: I0131 05:46:25.932774 5050 scope.go:117] "RemoveContainer" containerID="d7a0100a127ca366c9dc2c59b309ba982bd26b372e38b3b43419ddb2bb977412" Jan 31 05:46:25 crc kubenswrapper[5050]: I0131 05:46:25.973244 5050 scope.go:117] "RemoveContainer" containerID="46fbaf7c19b33f38f91ffdd547e097892b48ff5b4c4b0036b2be3104368a2239" Jan 31 05:46:26 crc kubenswrapper[5050]: I0131 05:46:26.035766 5050 scope.go:117] "RemoveContainer" containerID="8d827da5146bb4251f1e46f43bb0de8eb8a0e96d63a8107cd6ce87e008100091" Jan 31 05:46:26 crc kubenswrapper[5050]: I0131 05:46:26.064019 5050 scope.go:117] "RemoveContainer" containerID="95b0d44235c9900d92630ad20c3542a014ae2cfb79568d614357eaf25852048e" Jan 31 05:46:26 crc kubenswrapper[5050]: I0131 05:46:26.111360 5050 scope.go:117] "RemoveContainer" containerID="c1bda78eaf69c98db29e69077ed67d9a60cab756becd6bd3de755df17c870d7d" Jan 31 05:46:26 crc kubenswrapper[5050]: I0131 05:46:26.147133 5050 scope.go:117] "RemoveContainer" containerID="772612338aa9c9ef17a1f751410aabf602bfd8393c0a751387afb1cecc31ae06" Jan 31 05:46:28 crc kubenswrapper[5050]: I0131 05:46:28.470760 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:28 crc kubenswrapper[5050]: I0131 05:46:28.551290 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:28 crc kubenswrapper[5050]: I0131 05:46:28.723729 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7xh65"] Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.086026 5050 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/community-operators-7xh65" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="registry-server" containerID="cri-o://6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a" gracePeriod=2 Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.578260 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.641934 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-catalog-content\") pod \"b12a28d7-a615-469c-b7c0-072a4128f8f1\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.642017 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-utilities\") pod \"b12a28d7-a615-469c-b7c0-072a4128f8f1\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.642211 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbctn\" (UniqueName: \"kubernetes.io/projected/b12a28d7-a615-469c-b7c0-072a4128f8f1-kube-api-access-cbctn\") pod \"b12a28d7-a615-469c-b7c0-072a4128f8f1\" (UID: \"b12a28d7-a615-469c-b7c0-072a4128f8f1\") " Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.642493 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-utilities" (OuterVolumeSpecName: "utilities") pod "b12a28d7-a615-469c-b7c0-072a4128f8f1" (UID: "b12a28d7-a615-469c-b7c0-072a4128f8f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.642711 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.652120 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12a28d7-a615-469c-b7c0-072a4128f8f1-kube-api-access-cbctn" (OuterVolumeSpecName: "kube-api-access-cbctn") pod "b12a28d7-a615-469c-b7c0-072a4128f8f1" (UID: "b12a28d7-a615-469c-b7c0-072a4128f8f1"). InnerVolumeSpecName "kube-api-access-cbctn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.701756 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b12a28d7-a615-469c-b7c0-072a4128f8f1" (UID: "b12a28d7-a615-469c-b7c0-072a4128f8f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.745457 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbctn\" (UniqueName: \"kubernetes.io/projected/b12a28d7-a615-469c-b7c0-072a4128f8f1-kube-api-access-cbctn\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:30 crc kubenswrapper[5050]: I0131 05:46:30.745505 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12a28d7-a615-469c-b7c0-072a4128f8f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.099716 5050 generic.go:334] "Generic (PLEG): container finished" podID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerID="6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a" exitCode=0 Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.099809 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xh65" event={"ID":"b12a28d7-a615-469c-b7c0-072a4128f8f1","Type":"ContainerDied","Data":"6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a"} Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.099863 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xh65" event={"ID":"b12a28d7-a615-469c-b7c0-072a4128f8f1","Type":"ContainerDied","Data":"1538e0905a442439d041b7d6c0d03d0429f84ab60b287efee3e62e12b858f315"} Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.099876 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7xh65" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.099895 5050 scope.go:117] "RemoveContainer" containerID="6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.124814 5050 scope.go:117] "RemoveContainer" containerID="a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.166189 5050 scope.go:117] "RemoveContainer" containerID="5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.168807 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7xh65"] Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.179335 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7xh65"] Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.213398 5050 scope.go:117] "RemoveContainer" containerID="6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a" Jan 31 05:46:31 crc kubenswrapper[5050]: E0131 05:46:31.214022 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a\": container with ID starting with 6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a not found: ID does not exist" containerID="6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.214085 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a"} err="failed to get container status \"6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a\": rpc error: code = NotFound desc = could not find 
container \"6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a\": container with ID starting with 6d229bdb938897d5c8827675c9ffed08940ddba47d2cc1432b555333f213e19a not found: ID does not exist" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.214123 5050 scope.go:117] "RemoveContainer" containerID="a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae" Jan 31 05:46:31 crc kubenswrapper[5050]: E0131 05:46:31.214665 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae\": container with ID starting with a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae not found: ID does not exist" containerID="a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.214714 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae"} err="failed to get container status \"a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae\": rpc error: code = NotFound desc = could not find container \"a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae\": container with ID starting with a450a40cd8cd0038b815de8f3fdbbacc465350a735b390b652081c6226c187ae not found: ID does not exist" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.214748 5050 scope.go:117] "RemoveContainer" containerID="5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585" Jan 31 05:46:31 crc kubenswrapper[5050]: E0131 05:46:31.215228 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585\": container with ID starting with 5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585 not found: ID does 
not exist" containerID="5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.215273 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585"} err="failed to get container status \"5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585\": rpc error: code = NotFound desc = could not find container \"5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585\": container with ID starting with 5d096395bc8f0d234a99935f29a0f98c3155d786e630275554790f7120527585 not found: ID does not exist" Jan 31 05:46:31 crc kubenswrapper[5050]: I0131 05:46:31.750674 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" path="/var/lib/kubelet/pods/b12a28d7-a615-469c-b7c0-072a4128f8f1/volumes" Jan 31 05:46:34 crc kubenswrapper[5050]: I0131 05:46:34.809287 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:34 crc kubenswrapper[5050]: I0131 05:46:34.877228 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:35 crc kubenswrapper[5050]: I0131 05:46:35.109407 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5b9d"] Jan 31 05:46:36 crc kubenswrapper[5050]: I0131 05:46:36.153050 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s5b9d" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="registry-server" containerID="cri-o://be6c91a02a20db3f7f9b32f718760bf5eeb8b65743d3510012c1b8f9a210951b" gracePeriod=2 Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.161563 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerID="be6c91a02a20db3f7f9b32f718760bf5eeb8b65743d3510012c1b8f9a210951b" exitCode=0 Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.161648 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5b9d" event={"ID":"beb28a8f-2f7b-4627-b94b-9fccb0f1a035","Type":"ContainerDied","Data":"be6c91a02a20db3f7f9b32f718760bf5eeb8b65743d3510012c1b8f9a210951b"} Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.161850 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5b9d" event={"ID":"beb28a8f-2f7b-4627-b94b-9fccb0f1a035","Type":"ContainerDied","Data":"e0a91bed192940cba558102e17e48e392e1e725987359d1a0cdd7296ce930da9"} Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.161868 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a91bed192940cba558102e17e48e392e1e725987359d1a0cdd7296ce930da9" Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.168455 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.280802 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-utilities\") pod \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.280902 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxkhr\" (UniqueName: \"kubernetes.io/projected/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-kube-api-access-rxkhr\") pod \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.280974 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-catalog-content\") pod \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\" (UID: \"beb28a8f-2f7b-4627-b94b-9fccb0f1a035\") " Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.281339 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-utilities" (OuterVolumeSpecName: "utilities") pod "beb28a8f-2f7b-4627-b94b-9fccb0f1a035" (UID: "beb28a8f-2f7b-4627-b94b-9fccb0f1a035"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.281662 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.293206 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-kube-api-access-rxkhr" (OuterVolumeSpecName: "kube-api-access-rxkhr") pod "beb28a8f-2f7b-4627-b94b-9fccb0f1a035" (UID: "beb28a8f-2f7b-4627-b94b-9fccb0f1a035"). InnerVolumeSpecName "kube-api-access-rxkhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.325155 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beb28a8f-2f7b-4627-b94b-9fccb0f1a035" (UID: "beb28a8f-2f7b-4627-b94b-9fccb0f1a035"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.383786 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxkhr\" (UniqueName: \"kubernetes.io/projected/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-kube-api-access-rxkhr\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:37 crc kubenswrapper[5050]: I0131 05:46:37.383822 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb28a8f-2f7b-4627-b94b-9fccb0f1a035-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:38 crc kubenswrapper[5050]: I0131 05:46:38.169367 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5b9d" Jan 31 05:46:38 crc kubenswrapper[5050]: I0131 05:46:38.239277 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5b9d"] Jan 31 05:46:38 crc kubenswrapper[5050]: I0131 05:46:38.253131 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s5b9d"] Jan 31 05:46:39 crc kubenswrapper[5050]: I0131 05:46:39.019677 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:46:39 crc kubenswrapper[5050]: I0131 05:46:39.019756 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:46:39 crc kubenswrapper[5050]: I0131 05:46:39.757401 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" path="/var/lib/kubelet/pods/beb28a8f-2f7b-4627-b94b-9fccb0f1a035/volumes" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.841293 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n62qv"] Jan 31 05:47:07 crc kubenswrapper[5050]: E0131 05:47:07.842417 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="registry-server" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842438 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="registry-server" Jan 31 
05:47:07 crc kubenswrapper[5050]: E0131 05:47:07.842460 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="extract-content" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842472 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="extract-content" Jan 31 05:47:07 crc kubenswrapper[5050]: E0131 05:47:07.842500 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="registry-server" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842513 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="registry-server" Jan 31 05:47:07 crc kubenswrapper[5050]: E0131 05:47:07.842530 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="extract-utilities" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842542 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="extract-utilities" Jan 31 05:47:07 crc kubenswrapper[5050]: E0131 05:47:07.842572 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="extract-content" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842582 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="extract-content" Jan 31 05:47:07 crc kubenswrapper[5050]: E0131 05:47:07.842610 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="extract-utilities" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842620 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="extract-utilities" Jan 31 
05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842901 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb28a8f-2f7b-4627-b94b-9fccb0f1a035" containerName="registry-server" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.842927 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12a28d7-a615-469c-b7c0-072a4128f8f1" containerName="registry-server" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.845014 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.852434 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n62qv"] Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.961275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gvw\" (UniqueName: \"kubernetes.io/projected/21074876-cc36-42cc-bb10-96ccb8de3d5f-kube-api-access-r9gvw\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.961735 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-catalog-content\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:07 crc kubenswrapper[5050]: I0131 05:47:07.961863 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-utilities\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " 
pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.063564 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-catalog-content\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.064004 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-utilities\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.064119 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-catalog-content\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.064303 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9gvw\" (UniqueName: \"kubernetes.io/projected/21074876-cc36-42cc-bb10-96ccb8de3d5f-kube-api-access-r9gvw\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.064377 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-utilities\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" 
Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.086575 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9gvw\" (UniqueName: \"kubernetes.io/projected/21074876-cc36-42cc-bb10-96ccb8de3d5f-kube-api-access-r9gvw\") pod \"redhat-marketplace-n62qv\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.178169 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:08 crc kubenswrapper[5050]: I0131 05:47:08.558043 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n62qv"] Jan 31 05:47:09 crc kubenswrapper[5050]: I0131 05:47:09.018654 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:47:09 crc kubenswrapper[5050]: I0131 05:47:09.019072 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:47:09 crc kubenswrapper[5050]: I0131 05:47:09.476817 5050 generic.go:334] "Generic (PLEG): container finished" podID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerID="bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac" exitCode=0 Jan 31 05:47:09 crc kubenswrapper[5050]: I0131 05:47:09.476862 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n62qv" 
event={"ID":"21074876-cc36-42cc-bb10-96ccb8de3d5f","Type":"ContainerDied","Data":"bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac"} Jan 31 05:47:09 crc kubenswrapper[5050]: I0131 05:47:09.476891 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n62qv" event={"ID":"21074876-cc36-42cc-bb10-96ccb8de3d5f","Type":"ContainerStarted","Data":"1cf09a2096e2cbf4b3334058ccae10f2b5a4ec88852fd2abb610d5c17fc759cf"} Jan 31 05:47:10 crc kubenswrapper[5050]: I0131 05:47:10.489212 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n62qv" event={"ID":"21074876-cc36-42cc-bb10-96ccb8de3d5f","Type":"ContainerStarted","Data":"5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885"} Jan 31 05:47:11 crc kubenswrapper[5050]: I0131 05:47:11.500863 5050 generic.go:334] "Generic (PLEG): container finished" podID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerID="5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885" exitCode=0 Jan 31 05:47:11 crc kubenswrapper[5050]: I0131 05:47:11.500920 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n62qv" event={"ID":"21074876-cc36-42cc-bb10-96ccb8de3d5f","Type":"ContainerDied","Data":"5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885"} Jan 31 05:47:12 crc kubenswrapper[5050]: I0131 05:47:12.515821 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n62qv" event={"ID":"21074876-cc36-42cc-bb10-96ccb8de3d5f","Type":"ContainerStarted","Data":"4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2"} Jan 31 05:47:12 crc kubenswrapper[5050]: I0131 05:47:12.546811 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n62qv" podStartSLOduration=2.999358188 podStartE2EDuration="5.546791918s" podCreationTimestamp="2026-01-31 05:47:07 +0000 
UTC" firstStartedPulling="2026-01-31 05:47:09.478753492 +0000 UTC m=+1554.527915088" lastFinishedPulling="2026-01-31 05:47:12.026187212 +0000 UTC m=+1557.075348818" observedRunningTime="2026-01-31 05:47:12.541622406 +0000 UTC m=+1557.590784082" watchObservedRunningTime="2026-01-31 05:47:12.546791918 +0000 UTC m=+1557.595953514" Jan 31 05:47:18 crc kubenswrapper[5050]: I0131 05:47:18.178496 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:18 crc kubenswrapper[5050]: I0131 05:47:18.180547 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:18 crc kubenswrapper[5050]: I0131 05:47:18.221597 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:18 crc kubenswrapper[5050]: I0131 05:47:18.621287 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:18 crc kubenswrapper[5050]: I0131 05:47:18.675619 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n62qv"] Jan 31 05:47:20 crc kubenswrapper[5050]: I0131 05:47:20.599612 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n62qv" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="registry-server" containerID="cri-o://4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2" gracePeriod=2 Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.422465 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.547012 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-utilities\") pod \"21074876-cc36-42cc-bb10-96ccb8de3d5f\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.547352 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9gvw\" (UniqueName: \"kubernetes.io/projected/21074876-cc36-42cc-bb10-96ccb8de3d5f-kube-api-access-r9gvw\") pod \"21074876-cc36-42cc-bb10-96ccb8de3d5f\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.547598 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-catalog-content\") pod \"21074876-cc36-42cc-bb10-96ccb8de3d5f\" (UID: \"21074876-cc36-42cc-bb10-96ccb8de3d5f\") " Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.547851 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-utilities" (OuterVolumeSpecName: "utilities") pod "21074876-cc36-42cc-bb10-96ccb8de3d5f" (UID: "21074876-cc36-42cc-bb10-96ccb8de3d5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.552707 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21074876-cc36-42cc-bb10-96ccb8de3d5f-kube-api-access-r9gvw" (OuterVolumeSpecName: "kube-api-access-r9gvw") pod "21074876-cc36-42cc-bb10-96ccb8de3d5f" (UID: "21074876-cc36-42cc-bb10-96ccb8de3d5f"). InnerVolumeSpecName "kube-api-access-r9gvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.565010 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.565061 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9gvw\" (UniqueName: \"kubernetes.io/projected/21074876-cc36-42cc-bb10-96ccb8de3d5f-kube-api-access-r9gvw\") on node \"crc\" DevicePath \"\"" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.577797 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21074876-cc36-42cc-bb10-96ccb8de3d5f" (UID: "21074876-cc36-42cc-bb10-96ccb8de3d5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.608490 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n62qv" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.608458 5050 generic.go:334] "Generic (PLEG): container finished" podID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerID="4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2" exitCode=0 Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.609187 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n62qv" event={"ID":"21074876-cc36-42cc-bb10-96ccb8de3d5f","Type":"ContainerDied","Data":"4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2"} Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.609221 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n62qv" event={"ID":"21074876-cc36-42cc-bb10-96ccb8de3d5f","Type":"ContainerDied","Data":"1cf09a2096e2cbf4b3334058ccae10f2b5a4ec88852fd2abb610d5c17fc759cf"} Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.609244 5050 scope.go:117] "RemoveContainer" containerID="4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.635528 5050 scope.go:117] "RemoveContainer" containerID="5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.647229 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n62qv"] Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.656702 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n62qv"] Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.666868 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21074876-cc36-42cc-bb10-96ccb8de3d5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 
05:47:21.680408 5050 scope.go:117] "RemoveContainer" containerID="bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.706258 5050 scope.go:117] "RemoveContainer" containerID="4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2" Jan 31 05:47:21 crc kubenswrapper[5050]: E0131 05:47:21.706837 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2\": container with ID starting with 4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2 not found: ID does not exist" containerID="4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.706995 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2"} err="failed to get container status \"4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2\": rpc error: code = NotFound desc = could not find container \"4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2\": container with ID starting with 4bc1043e319292b197e70d0c31be0e71e40bfb5771e9497a5aa4572b3e75c3e2 not found: ID does not exist" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.707139 5050 scope.go:117] "RemoveContainer" containerID="5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885" Jan 31 05:47:21 crc kubenswrapper[5050]: E0131 05:47:21.707539 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885\": container with ID starting with 5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885 not found: ID does not exist" 
containerID="5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.707569 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885"} err="failed to get container status \"5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885\": rpc error: code = NotFound desc = could not find container \"5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885\": container with ID starting with 5e7f6a185272d4b84c9ebf1a5cc1b37e08739b5cea4acdb2c9ab87b84c781885 not found: ID does not exist" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.707583 5050 scope.go:117] "RemoveContainer" containerID="bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac" Jan 31 05:47:21 crc kubenswrapper[5050]: E0131 05:47:21.707897 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac\": container with ID starting with bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac not found: ID does not exist" containerID="bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.707941 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac"} err="failed to get container status \"bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac\": rpc error: code = NotFound desc = could not find container \"bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac\": container with ID starting with bfc69afe409ae068b883fcda54657526b6d9950f927cfd039dab64791065edac not found: ID does not exist" Jan 31 05:47:21 crc kubenswrapper[5050]: I0131 05:47:21.752735 5050 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" path="/var/lib/kubelet/pods/21074876-cc36-42cc-bb10-96ccb8de3d5f/volumes" Jan 31 05:47:21 crc kubenswrapper[5050]: E0131 05:47:21.753217 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21074876_cc36_42cc_bb10_96ccb8de3d5f.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.018615 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.019514 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.019618 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.020518 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.020619 5050 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" gracePeriod=600 Jan 31 05:47:39 crc kubenswrapper[5050]: E0131 05:47:39.156977 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.852667 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" exitCode=0 Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.852746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51"} Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.852826 5050 scope.go:117] "RemoveContainer" containerID="a251b39bb9c1d28bca8640aed32573ece3622a90bd61ebf25455027ba42bf7e7" Jan 31 05:47:39 crc kubenswrapper[5050]: I0131 05:47:39.853856 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:47:39 crc kubenswrapper[5050]: E0131 05:47:39.854796 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:47:54 crc kubenswrapper[5050]: I0131 05:47:54.737413 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:47:54 crc kubenswrapper[5050]: E0131 05:47:54.738364 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:48:05 crc kubenswrapper[5050]: I0131 05:48:05.750845 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:48:05 crc kubenswrapper[5050]: E0131 05:48:05.751780 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:48:17 crc kubenswrapper[5050]: I0131 05:48:17.738358 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:48:17 crc kubenswrapper[5050]: E0131 05:48:17.739729 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:48:28 crc kubenswrapper[5050]: I0131 05:48:28.738187 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:48:28 crc kubenswrapper[5050]: E0131 05:48:28.738982 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:48:36 crc kubenswrapper[5050]: I0131 05:48:36.446512 5050 generic.go:334] "Generic (PLEG): container finished" podID="e8d56aec-90df-4428-a321-97fcf90ff7f6" containerID="5309ef94180656087b9747dde22bac2730c648771bb11bacbe3c5645bf77c34a" exitCode=0 Jan 31 05:48:36 crc kubenswrapper[5050]: I0131 05:48:36.446592 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" event={"ID":"e8d56aec-90df-4428-a321-97fcf90ff7f6","Type":"ContainerDied","Data":"5309ef94180656087b9747dde22bac2730c648771bb11bacbe3c5645bf77c34a"} Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.925428 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.943388 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-ssh-key-openstack-edpm-ipam\") pod \"e8d56aec-90df-4428-a321-97fcf90ff7f6\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.943554 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-bootstrap-combined-ca-bundle\") pod \"e8d56aec-90df-4428-a321-97fcf90ff7f6\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.943644 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-inventory\") pod \"e8d56aec-90df-4428-a321-97fcf90ff7f6\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.943712 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2stbj\" (UniqueName: \"kubernetes.io/projected/e8d56aec-90df-4428-a321-97fcf90ff7f6-kube-api-access-2stbj\") pod \"e8d56aec-90df-4428-a321-97fcf90ff7f6\" (UID: \"e8d56aec-90df-4428-a321-97fcf90ff7f6\") " Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.959243 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d56aec-90df-4428-a321-97fcf90ff7f6-kube-api-access-2stbj" (OuterVolumeSpecName: "kube-api-access-2stbj") pod "e8d56aec-90df-4428-a321-97fcf90ff7f6" (UID: "e8d56aec-90df-4428-a321-97fcf90ff7f6"). InnerVolumeSpecName "kube-api-access-2stbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.978106 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e8d56aec-90df-4428-a321-97fcf90ff7f6" (UID: "e8d56aec-90df-4428-a321-97fcf90ff7f6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.989603 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-inventory" (OuterVolumeSpecName: "inventory") pod "e8d56aec-90df-4428-a321-97fcf90ff7f6" (UID: "e8d56aec-90df-4428-a321-97fcf90ff7f6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:48:37 crc kubenswrapper[5050]: I0131 05:48:37.991810 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8d56aec-90df-4428-a321-97fcf90ff7f6" (UID: "e8d56aec-90df-4428-a321-97fcf90ff7f6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.046187 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.046228 5050 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.046242 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8d56aec-90df-4428-a321-97fcf90ff7f6-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.046258 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2stbj\" (UniqueName: \"kubernetes.io/projected/e8d56aec-90df-4428-a321-97fcf90ff7f6-kube-api-access-2stbj\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.472286 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" event={"ID":"e8d56aec-90df-4428-a321-97fcf90ff7f6","Type":"ContainerDied","Data":"49d618745009dd84d582ab9ac31474be8b4d30732cfeda0be814e3fa27555ea1"} Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.472338 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49d618745009dd84d582ab9ac31474be8b4d30732cfeda0be814e3fa27555ea1" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.472374 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.567438 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb"] Jan 31 05:48:38 crc kubenswrapper[5050]: E0131 05:48:38.567895 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d56aec-90df-4428-a321-97fcf90ff7f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.567927 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d56aec-90df-4428-a321-97fcf90ff7f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 05:48:38 crc kubenswrapper[5050]: E0131 05:48:38.567942 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="extract-content" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.567982 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="extract-content" Jan 31 05:48:38 crc kubenswrapper[5050]: E0131 05:48:38.568032 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="extract-utilities" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.568043 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="extract-utilities" Jan 31 05:48:38 crc kubenswrapper[5050]: E0131 05:48:38.568062 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="registry-server" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.568072 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="registry-server" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.568281 
5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d56aec-90df-4428-a321-97fcf90ff7f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.568321 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="21074876-cc36-42cc-bb10-96ccb8de3d5f" containerName="registry-server" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.569102 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.571767 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.571785 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.571910 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.572807 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.577527 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb"] Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.657902 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rrd\" (UniqueName: \"kubernetes.io/projected/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-kube-api-access-g5rrd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 
05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.658233 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.658309 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.759742 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.759816 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.760073 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g5rrd\" (UniqueName: \"kubernetes.io/projected/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-kube-api-access-g5rrd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.764589 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.766729 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.777120 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5rrd\" (UniqueName: \"kubernetes.io/projected/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-kube-api-access-g5rrd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s56nb\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:38 crc kubenswrapper[5050]: I0131 05:48:38.886271 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:48:39 crc kubenswrapper[5050]: I0131 05:48:39.471396 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb"] Jan 31 05:48:40 crc kubenswrapper[5050]: I0131 05:48:40.500835 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" event={"ID":"23bdfddb-2289-439d-bc8d-7185ba9e9d5f","Type":"ContainerStarted","Data":"f921ea92b022c7c7a79523c0b4904588e421c1a660b1dd4db274d92ca73a2217"} Jan 31 05:48:40 crc kubenswrapper[5050]: I0131 05:48:40.501757 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" event={"ID":"23bdfddb-2289-439d-bc8d-7185ba9e9d5f","Type":"ContainerStarted","Data":"9e2710f97f6e0c3f296de36189b40ee2711225c14882250919761e4c9d3eead6"} Jan 31 05:48:40 crc kubenswrapper[5050]: I0131 05:48:40.527614 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" podStartSLOduration=2.040864635 podStartE2EDuration="2.527590296s" podCreationTimestamp="2026-01-31 05:48:38 +0000 UTC" firstStartedPulling="2026-01-31 05:48:39.504317784 +0000 UTC m=+1644.553479380" lastFinishedPulling="2026-01-31 05:48:39.991043415 +0000 UTC m=+1645.040205041" observedRunningTime="2026-01-31 05:48:40.520240255 +0000 UTC m=+1645.569401891" watchObservedRunningTime="2026-01-31 05:48:40.527590296 +0000 UTC m=+1645.576751912" Jan 31 05:48:42 crc kubenswrapper[5050]: I0131 05:48:42.736852 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:48:42 crc kubenswrapper[5050]: E0131 05:48:42.738241 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:48:57 crc kubenswrapper[5050]: I0131 05:48:57.735996 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:48:57 crc kubenswrapper[5050]: E0131 05:48:57.737016 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:49:12 crc kubenswrapper[5050]: I0131 05:49:12.737514 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:49:12 crc kubenswrapper[5050]: E0131 05:49:12.738503 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:49:23 crc kubenswrapper[5050]: I0131 05:49:23.736666 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:49:23 crc kubenswrapper[5050]: E0131 05:49:23.737661 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:49:34 crc kubenswrapper[5050]: I0131 05:49:34.051536 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qj56b"] Jan 31 05:49:34 crc kubenswrapper[5050]: I0131 05:49:34.064705 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qj56b"] Jan 31 05:49:34 crc kubenswrapper[5050]: I0131 05:49:34.737228 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:49:34 crc kubenswrapper[5050]: E0131 05:49:34.737926 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:49:35 crc kubenswrapper[5050]: I0131 05:49:35.040996 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b6a7-account-create-update-xrks7"] Jan 31 05:49:35 crc kubenswrapper[5050]: I0131 05:49:35.053514 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-45f4-account-create-update-dnhlv"] Jan 31 05:49:35 crc kubenswrapper[5050]: I0131 05:49:35.063751 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b6a7-account-create-update-xrks7"] Jan 31 05:49:35 crc kubenswrapper[5050]: I0131 05:49:35.070598 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-45f4-account-create-update-dnhlv"] Jan 31 05:49:35 crc kubenswrapper[5050]: I0131 05:49:35.758592 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9664617a-0182-491b-b8e4-dc8f49991888" path="/var/lib/kubelet/pods/9664617a-0182-491b-b8e4-dc8f49991888/volumes" Jan 31 05:49:35 crc kubenswrapper[5050]: I0131 05:49:35.759527 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdee2500-7092-4156-998b-46952a2ba2d7" path="/var/lib/kubelet/pods/bdee2500-7092-4156-998b-46952a2ba2d7/volumes" Jan 31 05:49:35 crc kubenswrapper[5050]: I0131 05:49:35.760445 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d403f46d-7461-4c02-8788-f0d4fc1039eb" path="/var/lib/kubelet/pods/d403f46d-7461-4c02-8788-f0d4fc1039eb/volumes" Jan 31 05:49:36 crc kubenswrapper[5050]: I0131 05:49:36.030912 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pnbwj"] Jan 31 05:49:36 crc kubenswrapper[5050]: I0131 05:49:36.037250 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pnbwj"] Jan 31 05:49:37 crc kubenswrapper[5050]: I0131 05:49:37.766165 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355e9317-6a93-47e5-83c5-1c5eb6a4d9a7" path="/var/lib/kubelet/pods/355e9317-6a93-47e5-83c5-1c5eb6a4d9a7/volumes" Jan 31 05:49:40 crc kubenswrapper[5050]: I0131 05:49:40.041113 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-srmqm"] Jan 31 05:49:40 crc kubenswrapper[5050]: I0131 05:49:40.071448 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-srmqm"] Jan 31 05:49:40 crc kubenswrapper[5050]: I0131 05:49:40.097756 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1bdf-account-create-update-tqmmv"] Jan 31 05:49:40 crc kubenswrapper[5050]: I0131 05:49:40.113604 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-1bdf-account-create-update-tqmmv"] Jan 31 05:49:41 crc kubenswrapper[5050]: I0131 05:49:41.750806 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ce1909-4188-4297-ba99-660320bdef11" path="/var/lib/kubelet/pods/49ce1909-4188-4297-ba99-660320bdef11/volumes" Jan 31 05:49:41 crc kubenswrapper[5050]: I0131 05:49:41.752460 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921e218b-b6a2-47ff-99e0-1b5199015acf" path="/var/lib/kubelet/pods/921e218b-b6a2-47ff-99e0-1b5199015acf/volumes" Jan 31 05:49:46 crc kubenswrapper[5050]: I0131 05:49:46.155217 5050 generic.go:334] "Generic (PLEG): container finished" podID="23bdfddb-2289-439d-bc8d-7185ba9e9d5f" containerID="f921ea92b022c7c7a79523c0b4904588e421c1a660b1dd4db274d92ca73a2217" exitCode=0 Jan 31 05:49:46 crc kubenswrapper[5050]: I0131 05:49:46.155325 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" event={"ID":"23bdfddb-2289-439d-bc8d-7185ba9e9d5f","Type":"ContainerDied","Data":"f921ea92b022c7c7a79523c0b4904588e421c1a660b1dd4db274d92ca73a2217"} Jan 31 05:49:46 crc kubenswrapper[5050]: I0131 05:49:46.736713 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:49:46 crc kubenswrapper[5050]: E0131 05:49:46.737440 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:49:47 crc kubenswrapper[5050]: I0131 05:49:47.956081 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.133532 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-ssh-key-openstack-edpm-ipam\") pod \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.133642 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5rrd\" (UniqueName: \"kubernetes.io/projected/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-kube-api-access-g5rrd\") pod \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.133768 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-inventory\") pod \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\" (UID: \"23bdfddb-2289-439d-bc8d-7185ba9e9d5f\") " Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.147375 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-kube-api-access-g5rrd" (OuterVolumeSpecName: "kube-api-access-g5rrd") pod "23bdfddb-2289-439d-bc8d-7185ba9e9d5f" (UID: "23bdfddb-2289-439d-bc8d-7185ba9e9d5f"). InnerVolumeSpecName "kube-api-access-g5rrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.179746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" event={"ID":"23bdfddb-2289-439d-bc8d-7185ba9e9d5f","Type":"ContainerDied","Data":"9e2710f97f6e0c3f296de36189b40ee2711225c14882250919761e4c9d3eead6"} Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.179803 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2710f97f6e0c3f296de36189b40ee2711225c14882250919761e4c9d3eead6" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.179888 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.183610 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-inventory" (OuterVolumeSpecName: "inventory") pod "23bdfddb-2289-439d-bc8d-7185ba9e9d5f" (UID: "23bdfddb-2289-439d-bc8d-7185ba9e9d5f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.183944 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23bdfddb-2289-439d-bc8d-7185ba9e9d5f" (UID: "23bdfddb-2289-439d-bc8d-7185ba9e9d5f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.236635 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.237407 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5rrd\" (UniqueName: \"kubernetes.io/projected/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-kube-api-access-g5rrd\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.237433 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23bdfddb-2289-439d-bc8d-7185ba9e9d5f-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.282699 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq"] Jan 31 05:49:48 crc kubenswrapper[5050]: E0131 05:49:48.283393 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23bdfddb-2289-439d-bc8d-7185ba9e9d5f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.283426 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="23bdfddb-2289-439d-bc8d-7185ba9e9d5f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.283778 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="23bdfddb-2289-439d-bc8d-7185ba9e9d5f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.284925 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.293539 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq"] Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.441304 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25rz\" (UniqueName: \"kubernetes.io/projected/83e1d789-6294-471b-b43c-5c0220fb84a6-kube-api-access-f25rz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.441885 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.442797 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.544781 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.544966 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.545042 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f25rz\" (UniqueName: \"kubernetes.io/projected/83e1d789-6294-471b-b43c-5c0220fb84a6-kube-api-access-f25rz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.549920 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.555874 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.567801 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25rz\" (UniqueName: \"kubernetes.io/projected/83e1d789-6294-471b-b43c-5c0220fb84a6-kube-api-access-f25rz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:48 crc kubenswrapper[5050]: I0131 05:49:48.612281 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:49 crc kubenswrapper[5050]: I0131 05:49:49.170910 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq"] Jan 31 05:49:49 crc kubenswrapper[5050]: I0131 05:49:49.188395 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" event={"ID":"83e1d789-6294-471b-b43c-5c0220fb84a6","Type":"ContainerStarted","Data":"0d1db2d41293e971baec7c3c71c3f15e96a5d4ea7e3594bfca9a08e04ee0eb52"} Jan 31 05:49:51 crc kubenswrapper[5050]: I0131 05:49:51.210383 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" event={"ID":"83e1d789-6294-471b-b43c-5c0220fb84a6","Type":"ContainerStarted","Data":"92867a083d1c3e48c8d8f07ec7b4378653795d4fbf15a46c087203570b7f2010"} Jan 31 05:49:51 crc kubenswrapper[5050]: I0131 05:49:51.240926 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" podStartSLOduration=1.858315884 podStartE2EDuration="3.240888946s" podCreationTimestamp="2026-01-31 05:49:48 +0000 UTC" firstStartedPulling="2026-01-31 
05:49:49.178168142 +0000 UTC m=+1714.227329738" lastFinishedPulling="2026-01-31 05:49:50.560741164 +0000 UTC m=+1715.609902800" observedRunningTime="2026-01-31 05:49:51.234122361 +0000 UTC m=+1716.283283997" watchObservedRunningTime="2026-01-31 05:49:51.240888946 +0000 UTC m=+1716.290050582" Jan 31 05:49:56 crc kubenswrapper[5050]: I0131 05:49:56.262375 5050 generic.go:334] "Generic (PLEG): container finished" podID="83e1d789-6294-471b-b43c-5c0220fb84a6" containerID="92867a083d1c3e48c8d8f07ec7b4378653795d4fbf15a46c087203570b7f2010" exitCode=0 Jan 31 05:49:56 crc kubenswrapper[5050]: I0131 05:49:56.262453 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" event={"ID":"83e1d789-6294-471b-b43c-5c0220fb84a6","Type":"ContainerDied","Data":"92867a083d1c3e48c8d8f07ec7b4378653795d4fbf15a46c087203570b7f2010"} Jan 31 05:49:57 crc kubenswrapper[5050]: I0131 05:49:57.734501 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:57 crc kubenswrapper[5050]: I0131 05:49:57.931589 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-inventory\") pod \"83e1d789-6294-471b-b43c-5c0220fb84a6\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " Jan 31 05:49:57 crc kubenswrapper[5050]: I0131 05:49:57.931986 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-ssh-key-openstack-edpm-ipam\") pod \"83e1d789-6294-471b-b43c-5c0220fb84a6\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " Jan 31 05:49:57 crc kubenswrapper[5050]: I0131 05:49:57.932029 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f25rz\" (UniqueName: \"kubernetes.io/projected/83e1d789-6294-471b-b43c-5c0220fb84a6-kube-api-access-f25rz\") pod \"83e1d789-6294-471b-b43c-5c0220fb84a6\" (UID: \"83e1d789-6294-471b-b43c-5c0220fb84a6\") " Jan 31 05:49:57 crc kubenswrapper[5050]: I0131 05:49:57.941285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e1d789-6294-471b-b43c-5c0220fb84a6-kube-api-access-f25rz" (OuterVolumeSpecName: "kube-api-access-f25rz") pod "83e1d789-6294-471b-b43c-5c0220fb84a6" (UID: "83e1d789-6294-471b-b43c-5c0220fb84a6"). InnerVolumeSpecName "kube-api-access-f25rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:49:57 crc kubenswrapper[5050]: I0131 05:49:57.959754 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-inventory" (OuterVolumeSpecName: "inventory") pod "83e1d789-6294-471b-b43c-5c0220fb84a6" (UID: "83e1d789-6294-471b-b43c-5c0220fb84a6"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:49:57 crc kubenswrapper[5050]: I0131 05:49:57.976213 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83e1d789-6294-471b-b43c-5c0220fb84a6" (UID: "83e1d789-6294-471b-b43c-5c0220fb84a6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.036563 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.036607 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83e1d789-6294-471b-b43c-5c0220fb84a6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.036622 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f25rz\" (UniqueName: \"kubernetes.io/projected/83e1d789-6294-471b-b43c-5c0220fb84a6-kube-api-access-f25rz\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.049155 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lw5z6"] Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.057606 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lw5z6"] Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.288905 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" 
event={"ID":"83e1d789-6294-471b-b43c-5c0220fb84a6","Type":"ContainerDied","Data":"0d1db2d41293e971baec7c3c71c3f15e96a5d4ea7e3594bfca9a08e04ee0eb52"} Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.289008 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d1db2d41293e971baec7c3c71c3f15e96a5d4ea7e3594bfca9a08e04ee0eb52" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.289075 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.378722 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx"] Jan 31 05:49:58 crc kubenswrapper[5050]: E0131 05:49:58.379249 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e1d789-6294-471b-b43c-5c0220fb84a6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.379273 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e1d789-6294-471b-b43c-5c0220fb84a6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.379515 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e1d789-6294-471b-b43c-5c0220fb84a6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.380357 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.382739 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.383302 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.383866 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.386562 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.391115 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx"] Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.545200 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.545309 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9kz8\" (UniqueName: \"kubernetes.io/projected/0649affe-1489-4041-9156-d876c086ca3c-kube-api-access-v9kz8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 
05:49:58.545473 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.647055 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9kz8\" (UniqueName: \"kubernetes.io/projected/0649affe-1489-4041-9156-d876c086ca3c-kube-api-access-v9kz8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.647161 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.647301 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.655127 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.657452 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.669409 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9kz8\" (UniqueName: \"kubernetes.io/projected/0649affe-1489-4041-9156-d876c086ca3c-kube-api-access-v9kz8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tbwkx\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:58 crc kubenswrapper[5050]: I0131 05:49:58.710046 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:49:59 crc kubenswrapper[5050]: I0131 05:49:59.299224 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx"] Jan 31 05:49:59 crc kubenswrapper[5050]: I0131 05:49:59.754723 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7" path="/var/lib/kubelet/pods/ffe87787-7eb7-436a-ad88-6fe2f8b1a6e7/volumes" Jan 31 05:50:00 crc kubenswrapper[5050]: I0131 05:50:00.311712 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" event={"ID":"0649affe-1489-4041-9156-d876c086ca3c","Type":"ContainerStarted","Data":"6846abaa2e4901ddd9f440b943f17549ca3b401ebf61099dfae993ebbc8c4d58"} Jan 31 05:50:00 crc kubenswrapper[5050]: I0131 05:50:00.311784 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" event={"ID":"0649affe-1489-4041-9156-d876c086ca3c","Type":"ContainerStarted","Data":"1f47e6501fd82ce7923100521d066064ea06cf4886b3131b7e0365ea5664d9fe"} Jan 31 05:50:00 crc kubenswrapper[5050]: I0131 05:50:00.334595 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" podStartSLOduration=1.883576733 podStartE2EDuration="2.334577808s" podCreationTimestamp="2026-01-31 05:49:58 +0000 UTC" firstStartedPulling="2026-01-31 05:49:59.298846656 +0000 UTC m=+1724.348008262" lastFinishedPulling="2026-01-31 05:49:59.749847701 +0000 UTC m=+1724.799009337" observedRunningTime="2026-01-31 05:50:00.329438037 +0000 UTC m=+1725.378599693" watchObservedRunningTime="2026-01-31 05:50:00.334577808 +0000 UTC m=+1725.383739404" Jan 31 05:50:01 crc kubenswrapper[5050]: I0131 05:50:01.737570 5050 scope.go:117] "RemoveContainer" 
containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:50:01 crc kubenswrapper[5050]: E0131 05:50:01.739906 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.049430 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-r5d7p"] Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.066781 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jpmz7"] Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.076778 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b1ec-account-create-update-p5wzj"] Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.085352 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jpmz7"] Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.093843 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b1ec-account-create-update-p5wzj"] Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.102394 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-r5d7p"] Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.747332 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dbc6186-a3de-418c-a213-3064164fc5bc" path="/var/lib/kubelet/pods/5dbc6186-a3de-418c-a213-3064164fc5bc/volumes" Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.748101 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e81c149b-a523-42c5-8d6b-2eefde46201a" 
path="/var/lib/kubelet/pods/e81c149b-a523-42c5-8d6b-2eefde46201a/volumes" Jan 31 05:50:03 crc kubenswrapper[5050]: I0131 05:50:03.748697 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25be051-f6a0-486d-a204-59b3f33af8c8" path="/var/lib/kubelet/pods/f25be051-f6a0-486d-a204-59b3f33af8c8/volumes" Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.049236 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-b2zd6"] Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.063063 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1612-account-create-update-2thjx"] Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.076915 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8fca-account-create-update-zgmfr"] Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.086353 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-b2zd6"] Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.094449 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lkhcw"] Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.101073 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8fca-account-create-update-zgmfr"] Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.108063 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lkhcw"] Jan 31 05:50:06 crc kubenswrapper[5050]: I0131 05:50:06.115657 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1612-account-create-update-2thjx"] Jan 31 05:50:07 crc kubenswrapper[5050]: I0131 05:50:07.759262 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea6a094-f9f7-4626-9241-c23f2d2685d7" path="/var/lib/kubelet/pods/0ea6a094-f9f7-4626-9241-c23f2d2685d7/volumes" Jan 31 05:50:07 crc kubenswrapper[5050]: I0131 05:50:07.761097 5050 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b522428-69eb-4f45-97c5-dc71f66011d6" path="/var/lib/kubelet/pods/2b522428-69eb-4f45-97c5-dc71f66011d6/volumes" Jan 31 05:50:07 crc kubenswrapper[5050]: I0131 05:50:07.762278 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d158e1ca-8b81-42bd-ad5e-69ae4017ad92" path="/var/lib/kubelet/pods/d158e1ca-8b81-42bd-ad5e-69ae4017ad92/volumes" Jan 31 05:50:07 crc kubenswrapper[5050]: I0131 05:50:07.763494 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67e4334-32bb-4e4f-9dad-8209b4e86495" path="/var/lib/kubelet/pods/e67e4334-32bb-4e4f-9dad-8209b4e86495/volumes" Jan 31 05:50:12 crc kubenswrapper[5050]: I0131 05:50:12.050741 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2zxjh"] Jan 31 05:50:12 crc kubenswrapper[5050]: I0131 05:50:12.072014 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2zxjh"] Jan 31 05:50:13 crc kubenswrapper[5050]: I0131 05:50:13.753565 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b21e3d-6de3-4be1-af37-d2fcf6d5521d" path="/var/lib/kubelet/pods/d1b21e3d-6de3-4be1-af37-d2fcf6d5521d/volumes" Jan 31 05:50:14 crc kubenswrapper[5050]: I0131 05:50:14.735987 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:50:14 crc kubenswrapper[5050]: E0131 05:50:14.736708 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.380788 5050 
scope.go:117] "RemoveContainer" containerID="b54621e91c67e160066fff6dff4ebb21dfe08c5d2bbe064d9aa0deda62d36cd4" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.412166 5050 scope.go:117] "RemoveContainer" containerID="02ebf91af5cb0526a93b15c60b199e280aaa3fcc610a5c5b508788340985885d" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.444092 5050 scope.go:117] "RemoveContainer" containerID="48d5738832e2c7daea91d17c0d359a95473297b7633bedf9f38ad57d5e563b54" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.493001 5050 scope.go:117] "RemoveContainer" containerID="72779be3c8faf9a173d6771bbc40ff0fcc81153a7f765779ae504affd9b7a3ab" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.547901 5050 scope.go:117] "RemoveContainer" containerID="7c830414db5d5ba00380225862a45291d788bc737ceafb7b9eab8d09fa1f5def" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.601629 5050 scope.go:117] "RemoveContainer" containerID="1f4eb9288d063fdd62051f6ce09637538957b8ad83684396226abed53ed44020" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.665847 5050 scope.go:117] "RemoveContainer" containerID="4b1a2791792810a1090871ed4f547300dcd528be44fda0466d851f237839eabc" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.698473 5050 scope.go:117] "RemoveContainer" containerID="e85f0727ea097ff70f2df6f83d977da5977c96217cf83547b639b74c1b7fc0ca" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.735256 5050 scope.go:117] "RemoveContainer" containerID="0c029cccd8c791b2721eb0632b396ef508c2df08924ce64c3fbf53916cdec762" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.759380 5050 scope.go:117] "RemoveContainer" containerID="826dd6116d19a9f79122376e576b137f40289e2c61b367a14f9b3d42ca9f7ae9" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.789552 5050 scope.go:117] "RemoveContainer" containerID="2f9ca96ac33593cb6ba0adfde7996b2145361062c1a2eda834e08003c9ed6009" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.814090 5050 scope.go:117] 
"RemoveContainer" containerID="c91c3442b509e8c8bc15f846eb7d1b3db9f875d2949b1a849f0732c35c46c3fb" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.837438 5050 scope.go:117] "RemoveContainer" containerID="87e5a11dbf69d0073fb361ff2299b3c44b0d0a301c33e1d1ad00f6b9274ea382" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.870409 5050 scope.go:117] "RemoveContainer" containerID="1012deac64cfedf3ca4c8d4b3f5303f269d7bc20e6ca0325c5758dfc0067ac56" Jan 31 05:50:26 crc kubenswrapper[5050]: I0131 05:50:26.890614 5050 scope.go:117] "RemoveContainer" containerID="b11a0ea164b5e09c0832f6f93d56876a6b8e8e8ca063c6facecb91057049549e" Jan 31 05:50:27 crc kubenswrapper[5050]: I0131 05:50:27.736864 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:50:27 crc kubenswrapper[5050]: E0131 05:50:27.737335 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:50:37 crc kubenswrapper[5050]: I0131 05:50:37.690136 5050 generic.go:334] "Generic (PLEG): container finished" podID="0649affe-1489-4041-9156-d876c086ca3c" containerID="6846abaa2e4901ddd9f440b943f17549ca3b401ebf61099dfae993ebbc8c4d58" exitCode=0 Jan 31 05:50:37 crc kubenswrapper[5050]: I0131 05:50:37.690266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" event={"ID":"0649affe-1489-4041-9156-d876c086ca3c","Type":"ContainerDied","Data":"6846abaa2e4901ddd9f440b943f17549ca3b401ebf61099dfae993ebbc8c4d58"} Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.158413 5050 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.284771 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-ssh-key-openstack-edpm-ipam\") pod \"0649affe-1489-4041-9156-d876c086ca3c\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.284874 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9kz8\" (UniqueName: \"kubernetes.io/projected/0649affe-1489-4041-9156-d876c086ca3c-kube-api-access-v9kz8\") pod \"0649affe-1489-4041-9156-d876c086ca3c\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.286263 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-inventory\") pod \"0649affe-1489-4041-9156-d876c086ca3c\" (UID: \"0649affe-1489-4041-9156-d876c086ca3c\") " Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.290620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0649affe-1489-4041-9156-d876c086ca3c-kube-api-access-v9kz8" (OuterVolumeSpecName: "kube-api-access-v9kz8") pod "0649affe-1489-4041-9156-d876c086ca3c" (UID: "0649affe-1489-4041-9156-d876c086ca3c"). InnerVolumeSpecName "kube-api-access-v9kz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.315575 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0649affe-1489-4041-9156-d876c086ca3c" (UID: "0649affe-1489-4041-9156-d876c086ca3c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.317756 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-inventory" (OuterVolumeSpecName: "inventory") pod "0649affe-1489-4041-9156-d876c086ca3c" (UID: "0649affe-1489-4041-9156-d876c086ca3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.388293 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.388335 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0649affe-1489-4041-9156-d876c086ca3c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.388351 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9kz8\" (UniqueName: \"kubernetes.io/projected/0649affe-1489-4041-9156-d876c086ca3c-kube-api-access-v9kz8\") on node \"crc\" DevicePath \"\"" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.740693 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:50:39 crc kubenswrapper[5050]: 
E0131 05:50:39.741130 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.748318 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.755540 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx" event={"ID":"0649affe-1489-4041-9156-d876c086ca3c","Type":"ContainerDied","Data":"1f47e6501fd82ce7923100521d066064ea06cf4886b3131b7e0365ea5664d9fe"} Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.755577 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f47e6501fd82ce7923100521d066064ea06cf4886b3131b7e0365ea5664d9fe" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.804480 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254"] Jan 31 05:50:39 crc kubenswrapper[5050]: E0131 05:50:39.804886 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0649affe-1489-4041-9156-d876c086ca3c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.804905 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0649affe-1489-4041-9156-d876c086ca3c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.805106 5050 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0649affe-1489-4041-9156-d876c086ca3c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.805706 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.808815 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.809037 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.809415 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.809471 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.811056 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254"] Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.898747 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.898813 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-inventory\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:39 crc kubenswrapper[5050]: I0131 05:50:39.899147 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwhc\" (UniqueName: \"kubernetes.io/projected/8790de5c-1f5c-4b1a-ba80-f8747c457975-kube-api-access-vkwhc\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.000590 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwhc\" (UniqueName: \"kubernetes.io/projected/8790de5c-1f5c-4b1a-ba80-f8747c457975-kube-api-access-vkwhc\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.000867 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.001002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.005769 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.015067 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.023231 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwhc\" (UniqueName: \"kubernetes.io/projected/8790de5c-1f5c-4b1a-ba80-f8747c457975-kube-api-access-vkwhc\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.126077 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.714753 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254"] Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.726911 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:50:40 crc kubenswrapper[5050]: I0131 05:50:40.758473 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" event={"ID":"8790de5c-1f5c-4b1a-ba80-f8747c457975","Type":"ContainerStarted","Data":"dd119a7c87b7a3c8f7fb2f86abc7c46931bd8f86eb5f85f0a6c8582069b22c28"} Jan 31 05:50:41 crc kubenswrapper[5050]: I0131 05:50:41.769427 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" event={"ID":"8790de5c-1f5c-4b1a-ba80-f8747c457975","Type":"ContainerStarted","Data":"afc9601a0b72ad604bbb9cf72b065ee53134820d9d1b68b1035ae900b2654c22"} Jan 31 05:50:41 crc kubenswrapper[5050]: I0131 05:50:41.790623 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" podStartSLOduration=2.353463175 podStartE2EDuration="2.790604098s" podCreationTimestamp="2026-01-31 05:50:39 +0000 UTC" firstStartedPulling="2026-01-31 05:50:40.726643153 +0000 UTC m=+1765.775804759" lastFinishedPulling="2026-01-31 05:50:41.163784056 +0000 UTC m=+1766.212945682" observedRunningTime="2026-01-31 05:50:41.785341966 +0000 UTC m=+1766.834503572" watchObservedRunningTime="2026-01-31 05:50:41.790604098 +0000 UTC m=+1766.839765704" Jan 31 05:50:45 crc kubenswrapper[5050]: I0131 05:50:45.812047 5050 generic.go:334] "Generic (PLEG): container finished" podID="8790de5c-1f5c-4b1a-ba80-f8747c457975" 
containerID="afc9601a0b72ad604bbb9cf72b065ee53134820d9d1b68b1035ae900b2654c22" exitCode=0 Jan 31 05:50:45 crc kubenswrapper[5050]: I0131 05:50:45.812100 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" event={"ID":"8790de5c-1f5c-4b1a-ba80-f8747c457975","Type":"ContainerDied","Data":"afc9601a0b72ad604bbb9cf72b065ee53134820d9d1b68b1035ae900b2654c22"} Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.256607 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.357616 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkwhc\" (UniqueName: \"kubernetes.io/projected/8790de5c-1f5c-4b1a-ba80-f8747c457975-kube-api-access-vkwhc\") pod \"8790de5c-1f5c-4b1a-ba80-f8747c457975\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.357711 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-ssh-key-openstack-edpm-ipam\") pod \"8790de5c-1f5c-4b1a-ba80-f8747c457975\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.357787 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-inventory\") pod \"8790de5c-1f5c-4b1a-ba80-f8747c457975\" (UID: \"8790de5c-1f5c-4b1a-ba80-f8747c457975\") " Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.363825 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8790de5c-1f5c-4b1a-ba80-f8747c457975-kube-api-access-vkwhc" (OuterVolumeSpecName: 
"kube-api-access-vkwhc") pod "8790de5c-1f5c-4b1a-ba80-f8747c457975" (UID: "8790de5c-1f5c-4b1a-ba80-f8747c457975"). InnerVolumeSpecName "kube-api-access-vkwhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.383761 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-inventory" (OuterVolumeSpecName: "inventory") pod "8790de5c-1f5c-4b1a-ba80-f8747c457975" (UID: "8790de5c-1f5c-4b1a-ba80-f8747c457975"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.402471 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8790de5c-1f5c-4b1a-ba80-f8747c457975" (UID: "8790de5c-1f5c-4b1a-ba80-f8747c457975"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.461718 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkwhc\" (UniqueName: \"kubernetes.io/projected/8790de5c-1f5c-4b1a-ba80-f8747c457975-kube-api-access-vkwhc\") on node \"crc\" DevicePath \"\"" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.461770 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.461784 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8790de5c-1f5c-4b1a-ba80-f8747c457975-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.833543 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" event={"ID":"8790de5c-1f5c-4b1a-ba80-f8747c457975","Type":"ContainerDied","Data":"dd119a7c87b7a3c8f7fb2f86abc7c46931bd8f86eb5f85f0a6c8582069b22c28"} Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.833885 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd119a7c87b7a3c8f7fb2f86abc7c46931bd8f86eb5f85f0a6c8582069b22c28" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.833606 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.904452 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5"] Jan 31 05:50:47 crc kubenswrapper[5050]: E0131 05:50:47.904910 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8790de5c-1f5c-4b1a-ba80-f8747c457975" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.904937 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8790de5c-1f5c-4b1a-ba80-f8747c457975" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.905148 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8790de5c-1f5c-4b1a-ba80-f8747c457975" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.905876 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.908812 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.909048 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.909114 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.910790 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:50:47 crc kubenswrapper[5050]: I0131 05:50:47.916536 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5"] Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.072275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.072345 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.072727 
5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcxq\" (UniqueName: \"kubernetes.io/projected/f3eaa943-616f-4418-8969-77ad18f14208-kube-api-access-6qcxq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.175266 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcxq\" (UniqueName: \"kubernetes.io/projected/f3eaa943-616f-4418-8969-77ad18f14208-kube-api-access-6qcxq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.175484 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.175542 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.182798 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.183578 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.201876 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcxq\" (UniqueName: \"kubernetes.io/projected/f3eaa943-616f-4418-8969-77ad18f14208-kube-api-access-6qcxq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.240650 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.819880 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5"] Jan 31 05:50:48 crc kubenswrapper[5050]: W0131 05:50:48.826355 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3eaa943_616f_4418_8969_77ad18f14208.slice/crio-e7b9a4c5f89a2caf16d93cee451636c8f1b697ef2dab9712088e95e3cc1e66e2 WatchSource:0}: Error finding container e7b9a4c5f89a2caf16d93cee451636c8f1b697ef2dab9712088e95e3cc1e66e2: Status 404 returned error can't find the container with id e7b9a4c5f89a2caf16d93cee451636c8f1b697ef2dab9712088e95e3cc1e66e2 Jan 31 05:50:48 crc kubenswrapper[5050]: I0131 05:50:48.847239 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" event={"ID":"f3eaa943-616f-4418-8969-77ad18f14208","Type":"ContainerStarted","Data":"e7b9a4c5f89a2caf16d93cee451636c8f1b697ef2dab9712088e95e3cc1e66e2"} Jan 31 05:50:49 crc kubenswrapper[5050]: I0131 05:50:49.863228 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" event={"ID":"f3eaa943-616f-4418-8969-77ad18f14208","Type":"ContainerStarted","Data":"81cc5cd56b1997a8ff3c38c45d9ae2d9329e45de5f7883a6d06f1a243ee03549"} Jan 31 05:50:49 crc kubenswrapper[5050]: I0131 05:50:49.893999 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" podStartSLOduration=2.4725080999999998 podStartE2EDuration="2.893980227s" podCreationTimestamp="2026-01-31 05:50:47 +0000 UTC" firstStartedPulling="2026-01-31 05:50:48.829864547 +0000 UTC m=+1773.879026183" lastFinishedPulling="2026-01-31 05:50:49.251336674 +0000 UTC 
m=+1774.300498310" observedRunningTime="2026-01-31 05:50:49.886615947 +0000 UTC m=+1774.935777613" watchObservedRunningTime="2026-01-31 05:50:49.893980227 +0000 UTC m=+1774.943141833" Jan 31 05:50:54 crc kubenswrapper[5050]: I0131 05:50:54.736704 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:50:54 crc kubenswrapper[5050]: E0131 05:50:54.737484 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:51:06 crc kubenswrapper[5050]: I0131 05:51:06.059506 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-66vdh"] Jan 31 05:51:06 crc kubenswrapper[5050]: I0131 05:51:06.085314 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-66vdh"] Jan 31 05:51:07 crc kubenswrapper[5050]: I0131 05:51:07.775400 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00cb797-dd0a-4e75-844f-45a7ddd15d45" path="/var/lib/kubelet/pods/d00cb797-dd0a-4e75-844f-45a7ddd15d45/volumes" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.000347 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98knd"] Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.003335 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.016638 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98knd"] Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.180658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-catalog-content\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.180716 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-utilities\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.180990 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8gz8\" (UniqueName: \"kubernetes.io/projected/b92b092d-a0d1-4457-a390-2f3504f69cc5-kube-api-access-p8gz8\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.284079 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-catalog-content\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.284224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-utilities\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.284461 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8gz8\" (UniqueName: \"kubernetes.io/projected/b92b092d-a0d1-4457-a390-2f3504f69cc5-kube-api-access-p8gz8\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.284630 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-utilities\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.284897 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-catalog-content\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.315531 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8gz8\" (UniqueName: \"kubernetes.io/projected/b92b092d-a0d1-4457-a390-2f3504f69cc5-kube-api-access-p8gz8\") pod \"redhat-operators-98knd\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.341397 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:08 crc kubenswrapper[5050]: I0131 05:51:08.808755 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98knd"] Jan 31 05:51:08 crc kubenswrapper[5050]: W0131 05:51:08.811795 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb92b092d_a0d1_4457_a390_2f3504f69cc5.slice/crio-74b6772a949db3a1b39f543f8583768990b8d168f6523aab9395ad431ea1fed5 WatchSource:0}: Error finding container 74b6772a949db3a1b39f543f8583768990b8d168f6523aab9395ad431ea1fed5: Status 404 returned error can't find the container with id 74b6772a949db3a1b39f543f8583768990b8d168f6523aab9395ad431ea1fed5 Jan 31 05:51:09 crc kubenswrapper[5050]: I0131 05:51:09.053733 5050 generic.go:334] "Generic (PLEG): container finished" podID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerID="df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780" exitCode=0 Jan 31 05:51:09 crc kubenswrapper[5050]: I0131 05:51:09.053798 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98knd" event={"ID":"b92b092d-a0d1-4457-a390-2f3504f69cc5","Type":"ContainerDied","Data":"df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780"} Jan 31 05:51:09 crc kubenswrapper[5050]: I0131 05:51:09.053861 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98knd" event={"ID":"b92b092d-a0d1-4457-a390-2f3504f69cc5","Type":"ContainerStarted","Data":"74b6772a949db3a1b39f543f8583768990b8d168f6523aab9395ad431ea1fed5"} Jan 31 05:51:09 crc kubenswrapper[5050]: I0131 05:51:09.736390 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:51:09 crc kubenswrapper[5050]: E0131 05:51:09.736948 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:51:10 crc kubenswrapper[5050]: I0131 05:51:10.064251 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98knd" event={"ID":"b92b092d-a0d1-4457-a390-2f3504f69cc5","Type":"ContainerStarted","Data":"189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f"} Jan 31 05:51:11 crc kubenswrapper[5050]: I0131 05:51:11.077733 5050 generic.go:334] "Generic (PLEG): container finished" podID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerID="189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f" exitCode=0 Jan 31 05:51:11 crc kubenswrapper[5050]: I0131 05:51:11.077873 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98knd" event={"ID":"b92b092d-a0d1-4457-a390-2f3504f69cc5","Type":"ContainerDied","Data":"189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f"} Jan 31 05:51:12 crc kubenswrapper[5050]: I0131 05:51:12.088145 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98knd" event={"ID":"b92b092d-a0d1-4457-a390-2f3504f69cc5","Type":"ContainerStarted","Data":"da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003"} Jan 31 05:51:12 crc kubenswrapper[5050]: I0131 05:51:12.116108 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98knd" podStartSLOduration=2.655266709 podStartE2EDuration="5.116086509s" podCreationTimestamp="2026-01-31 05:51:07 +0000 UTC" firstStartedPulling="2026-01-31 05:51:09.055260936 +0000 UTC m=+1794.104422532" lastFinishedPulling="2026-01-31 
05:51:11.516080696 +0000 UTC m=+1796.565242332" observedRunningTime="2026-01-31 05:51:12.105021197 +0000 UTC m=+1797.154182823" watchObservedRunningTime="2026-01-31 05:51:12.116086509 +0000 UTC m=+1797.165248115" Jan 31 05:51:18 crc kubenswrapper[5050]: I0131 05:51:18.342397 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:18 crc kubenswrapper[5050]: I0131 05:51:18.342934 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:18 crc kubenswrapper[5050]: I0131 05:51:18.417434 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:19 crc kubenswrapper[5050]: I0131 05:51:19.213741 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:19 crc kubenswrapper[5050]: I0131 05:51:19.297340 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98knd"] Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.165251 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98knd" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerName="registry-server" containerID="cri-o://da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003" gracePeriod=2 Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.648450 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.748759 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-catalog-content\") pod \"b92b092d-a0d1-4457-a390-2f3504f69cc5\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.748913 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-utilities\") pod \"b92b092d-a0d1-4457-a390-2f3504f69cc5\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.748991 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8gz8\" (UniqueName: \"kubernetes.io/projected/b92b092d-a0d1-4457-a390-2f3504f69cc5-kube-api-access-p8gz8\") pod \"b92b092d-a0d1-4457-a390-2f3504f69cc5\" (UID: \"b92b092d-a0d1-4457-a390-2f3504f69cc5\") " Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.749924 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-utilities" (OuterVolumeSpecName: "utilities") pod "b92b092d-a0d1-4457-a390-2f3504f69cc5" (UID: "b92b092d-a0d1-4457-a390-2f3504f69cc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.755986 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92b092d-a0d1-4457-a390-2f3504f69cc5-kube-api-access-p8gz8" (OuterVolumeSpecName: "kube-api-access-p8gz8") pod "b92b092d-a0d1-4457-a390-2f3504f69cc5" (UID: "b92b092d-a0d1-4457-a390-2f3504f69cc5"). InnerVolumeSpecName "kube-api-access-p8gz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.853905 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.853970 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8gz8\" (UniqueName: \"kubernetes.io/projected/b92b092d-a0d1-4457-a390-2f3504f69cc5-kube-api-access-p8gz8\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.891567 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b92b092d-a0d1-4457-a390-2f3504f69cc5" (UID: "b92b092d-a0d1-4457-a390-2f3504f69cc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:51:21 crc kubenswrapper[5050]: I0131 05:51:21.956937 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92b092d-a0d1-4457-a390-2f3504f69cc5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.177913 5050 generic.go:334] "Generic (PLEG): container finished" podID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerID="da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003" exitCode=0 Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.177985 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98knd" event={"ID":"b92b092d-a0d1-4457-a390-2f3504f69cc5","Type":"ContainerDied","Data":"da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003"} Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.178029 5050 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98knd" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.178054 5050 scope.go:117] "RemoveContainer" containerID="da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.178039 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98knd" event={"ID":"b92b092d-a0d1-4457-a390-2f3504f69cc5","Type":"ContainerDied","Data":"74b6772a949db3a1b39f543f8583768990b8d168f6523aab9395ad431ea1fed5"} Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.207158 5050 scope.go:117] "RemoveContainer" containerID="189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.223593 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98knd"] Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.233375 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98knd"] Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.246103 5050 scope.go:117] "RemoveContainer" containerID="df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.284287 5050 scope.go:117] "RemoveContainer" containerID="da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003" Jan 31 05:51:22 crc kubenswrapper[5050]: E0131 05:51:22.284735 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003\": container with ID starting with da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003 not found: ID does not exist" containerID="da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.284783 5050 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003"} err="failed to get container status \"da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003\": rpc error: code = NotFound desc = could not find container \"da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003\": container with ID starting with da6ede5b5e9e1533247bf4b2e56e0261cb817bde6e4f3f84312fd29c5b603003 not found: ID does not exist" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.284814 5050 scope.go:117] "RemoveContainer" containerID="189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f" Jan 31 05:51:22 crc kubenswrapper[5050]: E0131 05:51:22.285262 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f\": container with ID starting with 189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f not found: ID does not exist" containerID="189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.285304 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f"} err="failed to get container status \"189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f\": rpc error: code = NotFound desc = could not find container \"189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f\": container with ID starting with 189f77b325f73751b0ac76aed2ff751add8f1e8e0688c7bc92c111c89846566f not found: ID does not exist" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.285336 5050 scope.go:117] "RemoveContainer" containerID="df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780" Jan 31 05:51:22 crc kubenswrapper[5050]: E0131 
05:51:22.285644 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780\": container with ID starting with df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780 not found: ID does not exist" containerID="df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.285671 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780"} err="failed to get container status \"df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780\": rpc error: code = NotFound desc = could not find container \"df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780\": container with ID starting with df171510f42160d910dc0313af816d64f33c9f3ecb2ff54e614edf069752b780 not found: ID does not exist" Jan 31 05:51:22 crc kubenswrapper[5050]: I0131 05:51:22.737606 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:51:22 crc kubenswrapper[5050]: E0131 05:51:22.738145 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:51:23 crc kubenswrapper[5050]: I0131 05:51:23.750075 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" path="/var/lib/kubelet/pods/b92b092d-a0d1-4457-a390-2f3504f69cc5/volumes" Jan 31 05:51:27 crc kubenswrapper[5050]: I0131 05:51:27.127692 
5050 scope.go:117] "RemoveContainer" containerID="d9288ab80fd79ff9533f033bf8ce0811dfde0e6b08ee876c408455cb1593cbb0" Jan 31 05:51:33 crc kubenswrapper[5050]: I0131 05:51:33.736588 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:51:33 crc kubenswrapper[5050]: E0131 05:51:33.737704 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:51:36 crc kubenswrapper[5050]: I0131 05:51:36.052181 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j4ptr"] Jan 31 05:51:36 crc kubenswrapper[5050]: I0131 05:51:36.068807 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j4ptr"] Jan 31 05:51:37 crc kubenswrapper[5050]: I0131 05:51:37.751865 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a63fe16-7a6d-429f-bfd4-5dd5db95be12" path="/var/lib/kubelet/pods/5a63fe16-7a6d-429f-bfd4-5dd5db95be12/volumes" Jan 31 05:51:38 crc kubenswrapper[5050]: I0131 05:51:38.047156 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-88mvr"] Jan 31 05:51:38 crc kubenswrapper[5050]: I0131 05:51:38.063163 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-88mvr"] Jan 31 05:51:39 crc kubenswrapper[5050]: I0131 05:51:39.748784 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9fb9c4-2743-4932-8605-f9be30344553" path="/var/lib/kubelet/pods/4e9fb9c4-2743-4932-8605-f9be30344553/volumes" Jan 31 05:51:40 crc kubenswrapper[5050]: I0131 05:51:40.040011 5050 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5gld6"] Jan 31 05:51:40 crc kubenswrapper[5050]: I0131 05:51:40.049720 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5gld6"] Jan 31 05:51:40 crc kubenswrapper[5050]: I0131 05:51:40.373789 5050 generic.go:334] "Generic (PLEG): container finished" podID="f3eaa943-616f-4418-8969-77ad18f14208" containerID="81cc5cd56b1997a8ff3c38c45d9ae2d9329e45de5f7883a6d06f1a243ee03549" exitCode=0 Jan 31 05:51:40 crc kubenswrapper[5050]: I0131 05:51:40.373872 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" event={"ID":"f3eaa943-616f-4418-8969-77ad18f14208","Type":"ContainerDied","Data":"81cc5cd56b1997a8ff3c38c45d9ae2d9329e45de5f7883a6d06f1a243ee03549"} Jan 31 05:51:41 crc kubenswrapper[5050]: I0131 05:51:41.746825 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad1668e-92d0-48a9-9e34-aa95875ce641" path="/var/lib/kubelet/pods/dad1668e-92d0-48a9-9e34-aa95875ce641/volumes" Jan 31 05:51:41 crc kubenswrapper[5050]: I0131 05:51:41.877097 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:51:41 crc kubenswrapper[5050]: I0131 05:51:41.995098 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-ssh-key-openstack-edpm-ipam\") pod \"f3eaa943-616f-4418-8969-77ad18f14208\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " Jan 31 05:51:41 crc kubenswrapper[5050]: I0131 05:51:41.995243 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-inventory\") pod \"f3eaa943-616f-4418-8969-77ad18f14208\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " Jan 31 05:51:41 crc kubenswrapper[5050]: I0131 05:51:41.995391 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qcxq\" (UniqueName: \"kubernetes.io/projected/f3eaa943-616f-4418-8969-77ad18f14208-kube-api-access-6qcxq\") pod \"f3eaa943-616f-4418-8969-77ad18f14208\" (UID: \"f3eaa943-616f-4418-8969-77ad18f14208\") " Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.002351 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3eaa943-616f-4418-8969-77ad18f14208-kube-api-access-6qcxq" (OuterVolumeSpecName: "kube-api-access-6qcxq") pod "f3eaa943-616f-4418-8969-77ad18f14208" (UID: "f3eaa943-616f-4418-8969-77ad18f14208"). InnerVolumeSpecName "kube-api-access-6qcxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.026352 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f3eaa943-616f-4418-8969-77ad18f14208" (UID: "f3eaa943-616f-4418-8969-77ad18f14208"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.038277 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-inventory" (OuterVolumeSpecName: "inventory") pod "f3eaa943-616f-4418-8969-77ad18f14208" (UID: "f3eaa943-616f-4418-8969-77ad18f14208"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.098149 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.098201 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3eaa943-616f-4418-8969-77ad18f14208-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.098224 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qcxq\" (UniqueName: \"kubernetes.io/projected/f3eaa943-616f-4418-8969-77ad18f14208-kube-api-access-6qcxq\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.394695 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" 
event={"ID":"f3eaa943-616f-4418-8969-77ad18f14208","Type":"ContainerDied","Data":"e7b9a4c5f89a2caf16d93cee451636c8f1b697ef2dab9712088e95e3cc1e66e2"} Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.394967 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b9a4c5f89a2caf16d93cee451636c8f1b697ef2dab9712088e95e3cc1e66e2" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.394992 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.476174 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjmn5"] Jan 31 05:51:42 crc kubenswrapper[5050]: E0131 05:51:42.476501 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerName="extract-content" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.476513 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerName="extract-content" Jan 31 05:51:42 crc kubenswrapper[5050]: E0131 05:51:42.476525 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3eaa943-616f-4418-8969-77ad18f14208" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.476532 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3eaa943-616f-4418-8969-77ad18f14208" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:51:42 crc kubenswrapper[5050]: E0131 05:51:42.476542 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerName="registry-server" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.476549 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" 
containerName="registry-server" Jan 31 05:51:42 crc kubenswrapper[5050]: E0131 05:51:42.476565 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerName="extract-utilities" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.476573 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerName="extract-utilities" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.476779 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3eaa943-616f-4418-8969-77ad18f14208" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.476797 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92b092d-a0d1-4457-a390-2f3504f69cc5" containerName="registry-server" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.477532 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.479516 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.479858 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.480006 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.485602 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.490265 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjmn5"] Jan 31 05:51:42 crc kubenswrapper[5050]: 
I0131 05:51:42.508526 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.508681 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.508827 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnd2g\" (UniqueName: \"kubernetes.io/projected/14ef1790-aaf0-4cc6-aef2-e79671f739ee-kube-api-access-xnd2g\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.610026 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnd2g\" (UniqueName: \"kubernetes.io/projected/14ef1790-aaf0-4cc6-aef2-e79671f739ee-kube-api-access-xnd2g\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.610146 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: 
\"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.610205 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.616795 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.616849 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.640430 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnd2g\" (UniqueName: \"kubernetes.io/projected/14ef1790-aaf0-4cc6-aef2-e79671f739ee-kube-api-access-xnd2g\") pod \"ssh-known-hosts-edpm-deployment-zjmn5\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:42 crc kubenswrapper[5050]: I0131 05:51:42.806794 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:43 crc kubenswrapper[5050]: I0131 05:51:43.372549 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjmn5"] Jan 31 05:51:43 crc kubenswrapper[5050]: I0131 05:51:43.403723 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" event={"ID":"14ef1790-aaf0-4cc6-aef2-e79671f739ee","Type":"ContainerStarted","Data":"b0849508cc8cb5506ff8fd5dd3f3b700c1b475b1526729f9899d9c25b6365df0"} Jan 31 05:51:44 crc kubenswrapper[5050]: I0131 05:51:44.416063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" event={"ID":"14ef1790-aaf0-4cc6-aef2-e79671f739ee","Type":"ContainerStarted","Data":"ca6f31697f0b00e96de8d760c597f53b368e458fb733e5b1c8b775280fc3badb"} Jan 31 05:51:44 crc kubenswrapper[5050]: I0131 05:51:44.460374 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" podStartSLOduration=1.9861751490000001 podStartE2EDuration="2.46035125s" podCreationTimestamp="2026-01-31 05:51:42 +0000 UTC" firstStartedPulling="2026-01-31 05:51:43.389415514 +0000 UTC m=+1828.438577110" lastFinishedPulling="2026-01-31 05:51:43.863591615 +0000 UTC m=+1828.912753211" observedRunningTime="2026-01-31 05:51:44.449443633 +0000 UTC m=+1829.498605269" watchObservedRunningTime="2026-01-31 05:51:44.46035125 +0000 UTC m=+1829.509512846" Jan 31 05:51:48 crc kubenswrapper[5050]: I0131 05:51:48.736691 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:51:48 crc kubenswrapper[5050]: E0131 05:51:48.737687 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:51:51 crc kubenswrapper[5050]: I0131 05:51:51.492252 5050 generic.go:334] "Generic (PLEG): container finished" podID="14ef1790-aaf0-4cc6-aef2-e79671f739ee" containerID="ca6f31697f0b00e96de8d760c597f53b368e458fb733e5b1c8b775280fc3badb" exitCode=0 Jan 31 05:51:51 crc kubenswrapper[5050]: I0131 05:51:51.492355 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" event={"ID":"14ef1790-aaf0-4cc6-aef2-e79671f739ee","Type":"ContainerDied","Data":"ca6f31697f0b00e96de8d760c597f53b368e458fb733e5b1c8b775280fc3badb"} Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.028195 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.207631 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-ssh-key-openstack-edpm-ipam\") pod \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.207695 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnd2g\" (UniqueName: \"kubernetes.io/projected/14ef1790-aaf0-4cc6-aef2-e79671f739ee-kube-api-access-xnd2g\") pod \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.207836 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-inventory-0\") pod 
\"14ef1790-aaf0-4cc6-aef2-e79671f739ee\" (UID: \"14ef1790-aaf0-4cc6-aef2-e79671f739ee\") " Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.215104 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ef1790-aaf0-4cc6-aef2-e79671f739ee-kube-api-access-xnd2g" (OuterVolumeSpecName: "kube-api-access-xnd2g") pod "14ef1790-aaf0-4cc6-aef2-e79671f739ee" (UID: "14ef1790-aaf0-4cc6-aef2-e79671f739ee"). InnerVolumeSpecName "kube-api-access-xnd2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.251925 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14ef1790-aaf0-4cc6-aef2-e79671f739ee" (UID: "14ef1790-aaf0-4cc6-aef2-e79671f739ee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.252200 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "14ef1790-aaf0-4cc6-aef2-e79671f739ee" (UID: "14ef1790-aaf0-4cc6-aef2-e79671f739ee"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.311649 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.311694 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnd2g\" (UniqueName: \"kubernetes.io/projected/14ef1790-aaf0-4cc6-aef2-e79671f739ee-kube-api-access-xnd2g\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.311744 5050 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/14ef1790-aaf0-4cc6-aef2-e79671f739ee-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.515265 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" event={"ID":"14ef1790-aaf0-4cc6-aef2-e79671f739ee","Type":"ContainerDied","Data":"b0849508cc8cb5506ff8fd5dd3f3b700c1b475b1526729f9899d9c25b6365df0"} Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.515532 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0849508cc8cb5506ff8fd5dd3f3b700c1b475b1526729f9899d9c25b6365df0" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.515376 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjmn5" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.607074 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf"] Jan 31 05:51:53 crc kubenswrapper[5050]: E0131 05:51:53.607429 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ef1790-aaf0-4cc6-aef2-e79671f739ee" containerName="ssh-known-hosts-edpm-deployment" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.607441 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ef1790-aaf0-4cc6-aef2-e79671f739ee" containerName="ssh-known-hosts-edpm-deployment" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.607638 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ef1790-aaf0-4cc6-aef2-e79671f739ee" containerName="ssh-known-hosts-edpm-deployment" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.608235 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.612935 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.612939 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.613191 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.613627 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.622552 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf"] Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.719687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.719811 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc68m\" (UniqueName: \"kubernetes.io/projected/50b24e3e-7581-493b-8bd5-0dd7ff66858b-kube-api-access-vc68m\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.720229 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.821796 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.821862 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.821924 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc68m\" (UniqueName: \"kubernetes.io/projected/50b24e3e-7581-493b-8bd5-0dd7ff66858b-kube-api-access-vc68m\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.828424 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: 
\"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.833868 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.850864 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc68m\" (UniqueName: \"kubernetes.io/projected/50b24e3e-7581-493b-8bd5-0dd7ff66858b-kube-api-access-vc68m\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wq2kf\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:53 crc kubenswrapper[5050]: I0131 05:51:53.929200 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:51:54 crc kubenswrapper[5050]: I0131 05:51:54.545264 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf"] Jan 31 05:51:55 crc kubenswrapper[5050]: I0131 05:51:55.540215 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" event={"ID":"50b24e3e-7581-493b-8bd5-0dd7ff66858b","Type":"ContainerStarted","Data":"ac8ecc1a24f6905e99774fad0b0c7bf95c6aa701013ff744dc50a73d90de37e0"} Jan 31 05:51:55 crc kubenswrapper[5050]: I0131 05:51:55.540867 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" event={"ID":"50b24e3e-7581-493b-8bd5-0dd7ff66858b","Type":"ContainerStarted","Data":"f8bdf1c387ad615396781a1fb5e3d5643d0dd9e83d710249330fccecf3fc541e"} Jan 31 05:51:55 crc kubenswrapper[5050]: I0131 05:51:55.585776 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" podStartSLOduration=2.133944893 podStartE2EDuration="2.585749295s" podCreationTimestamp="2026-01-31 05:51:53 +0000 UTC" firstStartedPulling="2026-01-31 05:51:54.551468037 +0000 UTC m=+1839.600629633" lastFinishedPulling="2026-01-31 05:51:55.003272409 +0000 UTC m=+1840.052434035" observedRunningTime="2026-01-31 05:51:55.569184505 +0000 UTC m=+1840.618346141" watchObservedRunningTime="2026-01-31 05:51:55.585749295 +0000 UTC m=+1840.634910931" Jan 31 05:52:01 crc kubenswrapper[5050]: I0131 05:52:01.047896 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4kpps"] Jan 31 05:52:01 crc kubenswrapper[5050]: I0131 05:52:01.056139 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4kpps"] Jan 31 05:52:01 crc kubenswrapper[5050]: I0131 05:52:01.737043 5050 scope.go:117] 
"RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:52:01 crc kubenswrapper[5050]: E0131 05:52:01.737443 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:52:01 crc kubenswrapper[5050]: I0131 05:52:01.750182 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f82d6b-5e75-48cd-b642-55d3fa91f520" path="/var/lib/kubelet/pods/c9f82d6b-5e75-48cd-b642-55d3fa91f520/volumes" Jan 31 05:52:03 crc kubenswrapper[5050]: I0131 05:52:03.616601 5050 generic.go:334] "Generic (PLEG): container finished" podID="50b24e3e-7581-493b-8bd5-0dd7ff66858b" containerID="ac8ecc1a24f6905e99774fad0b0c7bf95c6aa701013ff744dc50a73d90de37e0" exitCode=0 Jan 31 05:52:03 crc kubenswrapper[5050]: I0131 05:52:03.616721 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" event={"ID":"50b24e3e-7581-493b-8bd5-0dd7ff66858b","Type":"ContainerDied","Data":"ac8ecc1a24f6905e99774fad0b0c7bf95c6aa701013ff744dc50a73d90de37e0"} Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.024115 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.025301 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zxbwv"] Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.033149 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zxbwv"] Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.156438 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-ssh-key-openstack-edpm-ipam\") pod \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.156591 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc68m\" (UniqueName: \"kubernetes.io/projected/50b24e3e-7581-493b-8bd5-0dd7ff66858b-kube-api-access-vc68m\") pod \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.156645 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-inventory\") pod \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\" (UID: \"50b24e3e-7581-493b-8bd5-0dd7ff66858b\") " Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.166391 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b24e3e-7581-493b-8bd5-0dd7ff66858b-kube-api-access-vc68m" (OuterVolumeSpecName: "kube-api-access-vc68m") pod "50b24e3e-7581-493b-8bd5-0dd7ff66858b" (UID: "50b24e3e-7581-493b-8bd5-0dd7ff66858b"). InnerVolumeSpecName "kube-api-access-vc68m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.185282 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50b24e3e-7581-493b-8bd5-0dd7ff66858b" (UID: "50b24e3e-7581-493b-8bd5-0dd7ff66858b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.187410 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-inventory" (OuterVolumeSpecName: "inventory") pod "50b24e3e-7581-493b-8bd5-0dd7ff66858b" (UID: "50b24e3e-7581-493b-8bd5-0dd7ff66858b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.259204 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc68m\" (UniqueName: \"kubernetes.io/projected/50b24e3e-7581-493b-8bd5-0dd7ff66858b-kube-api-access-vc68m\") on node \"crc\" DevicePath \"\"" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.259412 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.259517 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50b24e3e-7581-493b-8bd5-0dd7ff66858b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.644517 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" 
event={"ID":"50b24e3e-7581-493b-8bd5-0dd7ff66858b","Type":"ContainerDied","Data":"f8bdf1c387ad615396781a1fb5e3d5643d0dd9e83d710249330fccecf3fc541e"} Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.644567 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8bdf1c387ad615396781a1fb5e3d5643d0dd9e83d710249330fccecf3fc541e" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.644601 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.732710 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq"] Jan 31 05:52:05 crc kubenswrapper[5050]: E0131 05:52:05.733817 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b24e3e-7581-493b-8bd5-0dd7ff66858b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.733843 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b24e3e-7581-493b-8bd5-0dd7ff66858b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.734293 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b24e3e-7581-493b-8bd5-0dd7ff66858b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.735044 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.740519 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.740739 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.740933 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.741217 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.751338 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bde430b-fe13-43d9-b5e8-44c9c4953ad7" path="/var/lib/kubelet/pods/6bde430b-fe13-43d9-b5e8-44c9c4953ad7/volumes" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.752052 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq"] Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.902059 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.902128 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:05 crc kubenswrapper[5050]: I0131 05:52:05.902203 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnh6x\" (UniqueName: \"kubernetes.io/projected/d9c1bb5a-fc74-4d4e-8991-4945e6517846-kube-api-access-bnh6x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.004424 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.004479 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.004533 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnh6x\" (UniqueName: \"kubernetes.io/projected/d9c1bb5a-fc74-4d4e-8991-4945e6517846-kube-api-access-bnh6x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc 
kubenswrapper[5050]: I0131 05:52:06.011896 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.012672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.025151 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnh6x\" (UniqueName: \"kubernetes.io/projected/d9c1bb5a-fc74-4d4e-8991-4945e6517846-kube-api-access-bnh6x\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.037190 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-461c-account-create-update-6n75v"] Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.046792 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-70b5-account-create-update-kpzzv"] Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.054132 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.058565 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-461c-account-create-update-6n75v"] Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.066003 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-70b5-account-create-update-kpzzv"] Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.602200 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq"] Jan 31 05:52:06 crc kubenswrapper[5050]: W0131 05:52:06.602415 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c1bb5a_fc74_4d4e_8991_4945e6517846.slice/crio-5ca5afff5c27484a485cb34b203aa4ebc87178e1f9056e395d57ee2fb6a55844 WatchSource:0}: Error finding container 5ca5afff5c27484a485cb34b203aa4ebc87178e1f9056e395d57ee2fb6a55844: Status 404 returned error can't find the container with id 5ca5afff5c27484a485cb34b203aa4ebc87178e1f9056e395d57ee2fb6a55844 Jan 31 05:52:06 crc kubenswrapper[5050]: I0131 05:52:06.654526 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" event={"ID":"d9c1bb5a-fc74-4d4e-8991-4945e6517846","Type":"ContainerStarted","Data":"5ca5afff5c27484a485cb34b203aa4ebc87178e1f9056e395d57ee2fb6a55844"} Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.044017 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-gnrqd"] Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.053189 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gnrqd"] Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.066498 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-f173-account-create-update-47tzz"] Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.072177 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2kk7d"] Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.081140 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f173-account-create-update-47tzz"] Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.090971 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2kk7d"] Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.665180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" event={"ID":"d9c1bb5a-fc74-4d4e-8991-4945e6517846","Type":"ContainerStarted","Data":"3be545150fbdae10ff8219e4e1707b0375a2426a0895baa8afe636de57948123"} Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.700688 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" podStartSLOduration=2.292549147 podStartE2EDuration="2.700663421s" podCreationTimestamp="2026-01-31 05:52:05 +0000 UTC" firstStartedPulling="2026-01-31 05:52:06.605587028 +0000 UTC m=+1851.654748634" lastFinishedPulling="2026-01-31 05:52:07.013701282 +0000 UTC m=+1852.062862908" observedRunningTime="2026-01-31 05:52:07.692454358 +0000 UTC m=+1852.741615954" watchObservedRunningTime="2026-01-31 05:52:07.700663421 +0000 UTC m=+1852.749825017" Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.746688 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b998de1-8a4c-48c3-a3d5-4bf1309a8394" path="/var/lib/kubelet/pods/4b998de1-8a4c-48c3-a3d5-4bf1309a8394/volumes" Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.747670 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68226938-30ee-43b0-a15b-4ae65840c5b9" 
path="/var/lib/kubelet/pods/68226938-30ee-43b0-a15b-4ae65840c5b9/volumes" Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.749598 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b" path="/var/lib/kubelet/pods/7f1949d9-8ed7-4d51-91d0-82b8e77b6a4b/volumes" Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.750792 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac" path="/var/lib/kubelet/pods/d4921fdd-6ac8-41a1-bde2-d5ee0d3c61ac/volumes" Jan 31 05:52:07 crc kubenswrapper[5050]: I0131 05:52:07.752212 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88b9496-edac-4fbd-a33b-287b9289d20e" path="/var/lib/kubelet/pods/f88b9496-edac-4fbd-a33b-287b9289d20e/volumes" Jan 31 05:52:14 crc kubenswrapper[5050]: I0131 05:52:14.740902 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:52:14 crc kubenswrapper[5050]: E0131 05:52:14.742773 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:52:16 crc kubenswrapper[5050]: I0131 05:52:16.754530 5050 generic.go:334] "Generic (PLEG): container finished" podID="d9c1bb5a-fc74-4d4e-8991-4945e6517846" containerID="3be545150fbdae10ff8219e4e1707b0375a2426a0895baa8afe636de57948123" exitCode=0 Jan 31 05:52:16 crc kubenswrapper[5050]: I0131 05:52:16.754644 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" 
event={"ID":"d9c1bb5a-fc74-4d4e-8991-4945e6517846","Type":"ContainerDied","Data":"3be545150fbdae10ff8219e4e1707b0375a2426a0895baa8afe636de57948123"} Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.194650 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.347307 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-inventory\") pod \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.347550 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-ssh-key-openstack-edpm-ipam\") pod \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.347601 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnh6x\" (UniqueName: \"kubernetes.io/projected/d9c1bb5a-fc74-4d4e-8991-4945e6517846-kube-api-access-bnh6x\") pod \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\" (UID: \"d9c1bb5a-fc74-4d4e-8991-4945e6517846\") " Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.362919 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c1bb5a-fc74-4d4e-8991-4945e6517846-kube-api-access-bnh6x" (OuterVolumeSpecName: "kube-api-access-bnh6x") pod "d9c1bb5a-fc74-4d4e-8991-4945e6517846" (UID: "d9c1bb5a-fc74-4d4e-8991-4945e6517846"). InnerVolumeSpecName "kube-api-access-bnh6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.373463 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-inventory" (OuterVolumeSpecName: "inventory") pod "d9c1bb5a-fc74-4d4e-8991-4945e6517846" (UID: "d9c1bb5a-fc74-4d4e-8991-4945e6517846"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.389826 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9c1bb5a-fc74-4d4e-8991-4945e6517846" (UID: "d9c1bb5a-fc74-4d4e-8991-4945e6517846"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.451397 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.451443 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9c1bb5a-fc74-4d4e-8991-4945e6517846-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.451464 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnh6x\" (UniqueName: \"kubernetes.io/projected/d9c1bb5a-fc74-4d4e-8991-4945e6517846-kube-api-access-bnh6x\") on node \"crc\" DevicePath \"\"" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.776028 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" 
event={"ID":"d9c1bb5a-fc74-4d4e-8991-4945e6517846","Type":"ContainerDied","Data":"5ca5afff5c27484a485cb34b203aa4ebc87178e1f9056e395d57ee2fb6a55844"} Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.776604 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca5afff5c27484a485cb34b203aa4ebc87178e1f9056e395d57ee2fb6a55844" Jan 31 05:52:18 crc kubenswrapper[5050]: I0131 05:52:18.776094 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.229984 5050 scope.go:117] "RemoveContainer" containerID="be6c91a02a20db3f7f9b32f718760bf5eeb8b65743d3510012c1b8f9a210951b" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.258330 5050 scope.go:117] "RemoveContainer" containerID="3a25c68e27eedefdb7e08d0d76e9215fd83d575585bd941fd270f40cbdb6d599" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.283393 5050 scope.go:117] "RemoveContainer" containerID="c61f0f85a2885bb6f42a3e2da51334a056a26e80fea47eba6e73d0bd19d0ba27" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.322864 5050 scope.go:117] "RemoveContainer" containerID="68f7f56ffae81e641128b37b068c46006d3048daab86f910905070b2f0b5ad97" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.381992 5050 scope.go:117] "RemoveContainer" containerID="85daf693e8813572df891be894023528332c01b59f46c8ac34c40beb6704cb7e" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.423230 5050 scope.go:117] "RemoveContainer" containerID="e590d65552cbcb107f36936ba5038cdf01ad09707d0da9c596907436b63a0ea3" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.444717 5050 scope.go:117] "RemoveContainer" containerID="ec73b421c7e6d9e0dee12a373da83b30f79ab985560b0ef30c0a4587e7612e2c" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.479658 5050 scope.go:117] "RemoveContainer" 
containerID="f0dfd2019c58d47e2f8eef513b6d5ae57f2c27fe821a65035ee85f99a4f2aa67" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.503818 5050 scope.go:117] "RemoveContainer" containerID="462f853a42a44766e43edd798d9c85e04dadd25e65dc04fcc0618e3d650bccc4" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.528208 5050 scope.go:117] "RemoveContainer" containerID="2dccedd016e1a35024d34d75407373433dc3eca4cdbd0f9ace251083167c1ce7" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.545197 5050 scope.go:117] "RemoveContainer" containerID="718ca33c6d5cd225bed41d6f32a0b4b9b751af550254b4bfbb3e0144acea1d74" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.573994 5050 scope.go:117] "RemoveContainer" containerID="3fc6e85ff9d452f7dfc90bdd9bcc093fd59d91bf378415ebd70ca8b91e1cae5c" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.595100 5050 scope.go:117] "RemoveContainer" containerID="ef227f791ec628da58eb838457ad29f30b3f0a8626d036fcc2a89375f3421898" Jan 31 05:52:27 crc kubenswrapper[5050]: I0131 05:52:27.743435 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:52:27 crc kubenswrapper[5050]: E0131 05:52:27.744314 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:52:38 crc kubenswrapper[5050]: I0131 05:52:38.737306 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:52:38 crc kubenswrapper[5050]: E0131 05:52:38.738810 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:52:49 crc kubenswrapper[5050]: I0131 05:52:49.052505 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sf47v"] Jan 31 05:52:49 crc kubenswrapper[5050]: I0131 05:52:49.063114 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sf47v"] Jan 31 05:52:49 crc kubenswrapper[5050]: I0131 05:52:49.737099 5050 scope.go:117] "RemoveContainer" containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:52:49 crc kubenswrapper[5050]: I0131 05:52:49.768489 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9480c5f7-4801-47d5-abe0-3a7281596b0b" path="/var/lib/kubelet/pods/9480c5f7-4801-47d5-abe0-3a7281596b0b/volumes" Jan 31 05:52:50 crc kubenswrapper[5050]: I0131 05:52:50.087275 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"a0ffe130a70cd443f5238f6453ca9259b4bf9e12e4a2045bca916cd6b95e0823"} Jan 31 05:53:11 crc kubenswrapper[5050]: I0131 05:53:11.065297 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-75rfw"] Jan 31 05:53:11 crc kubenswrapper[5050]: I0131 05:53:11.071641 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-75rfw"] Jan 31 05:53:11 crc kubenswrapper[5050]: I0131 05:53:11.754237 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c98739-a178-40c1-94b1-a60d20b26f6e" 
path="/var/lib/kubelet/pods/18c98739-a178-40c1-94b1-a60d20b26f6e/volumes" Jan 31 05:53:13 crc kubenswrapper[5050]: I0131 05:53:13.049748 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vthl5"] Jan 31 05:53:13 crc kubenswrapper[5050]: I0131 05:53:13.067657 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vthl5"] Jan 31 05:53:13 crc kubenswrapper[5050]: I0131 05:53:13.748354 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdc6156e-bdae-4cf2-a051-9c884bd592ca" path="/var/lib/kubelet/pods/cdc6156e-bdae-4cf2-a051-9c884bd592ca/volumes" Jan 31 05:53:27 crc kubenswrapper[5050]: I0131 05:53:27.772915 5050 scope.go:117] "RemoveContainer" containerID="e2b15e320862bc3eeedeba075a95e6746665e06573dc864e9f3deba129317bf7" Jan 31 05:53:27 crc kubenswrapper[5050]: I0131 05:53:27.822208 5050 scope.go:117] "RemoveContainer" containerID="49214b9ef6f69861069ab4a0a5079412baa8c62594fb4894dc80cdd5f68ec5c2" Jan 31 05:53:27 crc kubenswrapper[5050]: I0131 05:53:27.888748 5050 scope.go:117] "RemoveContainer" containerID="5cbc06671b74733ef7a42e01de9dd9e35080c56aecac86fe7864f1ae5d931790" Jan 31 05:53:56 crc kubenswrapper[5050]: I0131 05:53:56.059734 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-blxjh"] Jan 31 05:53:56 crc kubenswrapper[5050]: I0131 05:53:56.075364 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-blxjh"] Jan 31 05:53:57 crc kubenswrapper[5050]: I0131 05:53:57.756870 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c" path="/var/lib/kubelet/pods/fb2ce56a-f67a-4dbc-9bc5-e5ba11a1843c/volumes" Jan 31 05:54:27 crc kubenswrapper[5050]: I0131 05:54:27.992863 5050 scope.go:117] "RemoveContainer" containerID="371421fc890f818595e0cb15e8837631374bb17d76260f0f01a3c8f2a2f4956a" Jan 31 05:55:09 crc 
kubenswrapper[5050]: I0131 05:55:09.018419 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:55:09 crc kubenswrapper[5050]: I0131 05:55:09.019356 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:55:39 crc kubenswrapper[5050]: I0131 05:55:39.017874 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:55:39 crc kubenswrapper[5050]: I0131 05:55:39.018662 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.018195 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.018987 5050 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.019057 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.020118 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0ffe130a70cd443f5238f6453ca9259b4bf9e12e4a2045bca916cd6b95e0823"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.020215 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://a0ffe130a70cd443f5238f6453ca9259b4bf9e12e4a2045bca916cd6b95e0823" gracePeriod=600 Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.314796 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="a0ffe130a70cd443f5238f6453ca9259b4bf9e12e4a2045bca916cd6b95e0823" exitCode=0 Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.315050 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"a0ffe130a70cd443f5238f6453ca9259b4bf9e12e4a2045bca916cd6b95e0823"} Jan 31 05:56:09 crc kubenswrapper[5050]: I0131 05:56:09.315209 5050 scope.go:117] "RemoveContainer" 
containerID="c4d146ad7bfefcc120edf574977ee047b926defccbb2c9143b9988ccf1dced51" Jan 31 05:56:10 crc kubenswrapper[5050]: I0131 05:56:10.337018 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239"} Jan 31 05:56:17 crc kubenswrapper[5050]: E0131 05:56:17.020762 5050 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.70:54786->38.102.83.70:44501: read tcp 38.102.83.70:54786->38.102.83.70:44501: read: connection reset by peer Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.207616 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zw4x5"] Jan 31 05:56:23 crc kubenswrapper[5050]: E0131 05:56:23.208909 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c1bb5a-fc74-4d4e-8991-4945e6517846" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.208934 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c1bb5a-fc74-4d4e-8991-4945e6517846" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.209258 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c1bb5a-fc74-4d4e-8991-4945e6517846" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.211447 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.232771 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zw4x5"] Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.400878 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxms\" (UniqueName: \"kubernetes.io/projected/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-kube-api-access-tgxms\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.401094 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-catalog-content\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.401133 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-utilities\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.502539 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-catalog-content\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.502623 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-utilities\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.502703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxms\" (UniqueName: \"kubernetes.io/projected/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-kube-api-access-tgxms\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.503223 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-catalog-content\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.503285 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-utilities\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.529550 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxms\" (UniqueName: \"kubernetes.io/projected/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-kube-api-access-tgxms\") pod \"certified-operators-zw4x5\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:23 crc kubenswrapper[5050]: I0131 05:56:23.551529 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:24 crc kubenswrapper[5050]: I0131 05:56:24.065343 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zw4x5"] Jan 31 05:56:24 crc kubenswrapper[5050]: I0131 05:56:24.473360 5050 generic.go:334] "Generic (PLEG): container finished" podID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerID="e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2" exitCode=0 Jan 31 05:56:24 crc kubenswrapper[5050]: I0131 05:56:24.473413 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zw4x5" event={"ID":"37d6bf8e-799a-4933-8d1a-bbb736d4e79c","Type":"ContainerDied","Data":"e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2"} Jan 31 05:56:24 crc kubenswrapper[5050]: I0131 05:56:24.473759 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zw4x5" event={"ID":"37d6bf8e-799a-4933-8d1a-bbb736d4e79c","Type":"ContainerStarted","Data":"758a7adbaeead84b0df9fb4f7721502aa04030d0a63fb08d587bc17280ad7894"} Jan 31 05:56:24 crc kubenswrapper[5050]: I0131 05:56:24.475397 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:56:25 crc kubenswrapper[5050]: I0131 05:56:25.485033 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zw4x5" event={"ID":"37d6bf8e-799a-4933-8d1a-bbb736d4e79c","Type":"ContainerStarted","Data":"519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e"} Jan 31 05:56:26 crc kubenswrapper[5050]: I0131 05:56:26.510334 5050 generic.go:334] "Generic (PLEG): container finished" podID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerID="519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e" exitCode=0 Jan 31 05:56:26 crc kubenswrapper[5050]: I0131 05:56:26.510456 5050 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-zw4x5" event={"ID":"37d6bf8e-799a-4933-8d1a-bbb736d4e79c","Type":"ContainerDied","Data":"519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e"} Jan 31 05:56:27 crc kubenswrapper[5050]: I0131 05:56:27.525786 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zw4x5" event={"ID":"37d6bf8e-799a-4933-8d1a-bbb736d4e79c","Type":"ContainerStarted","Data":"4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1"} Jan 31 05:56:27 crc kubenswrapper[5050]: I0131 05:56:27.556441 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zw4x5" podStartSLOduration=2.102344538 podStartE2EDuration="4.556420813s" podCreationTimestamp="2026-01-31 05:56:23 +0000 UTC" firstStartedPulling="2026-01-31 05:56:24.474894382 +0000 UTC m=+2109.524055988" lastFinishedPulling="2026-01-31 05:56:26.928970677 +0000 UTC m=+2111.978132263" observedRunningTime="2026-01-31 05:56:27.550295647 +0000 UTC m=+2112.599457283" watchObservedRunningTime="2026-01-31 05:56:27.556420813 +0000 UTC m=+2112.605582409" Jan 31 05:56:33 crc kubenswrapper[5050]: I0131 05:56:33.551833 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:33 crc kubenswrapper[5050]: I0131 05:56:33.552606 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:33 crc kubenswrapper[5050]: I0131 05:56:33.633247 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:33 crc kubenswrapper[5050]: I0131 05:56:33.722627 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:33 crc kubenswrapper[5050]: I0131 
05:56:33.886203 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zw4x5"] Jan 31 05:56:35 crc kubenswrapper[5050]: I0131 05:56:35.609536 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zw4x5" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="registry-server" containerID="cri-o://4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1" gracePeriod=2 Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.240014 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.398215 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-utilities\") pod \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.398319 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgxms\" (UniqueName: \"kubernetes.io/projected/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-kube-api-access-tgxms\") pod \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.398370 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-catalog-content\") pod \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\" (UID: \"37d6bf8e-799a-4933-8d1a-bbb736d4e79c\") " Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.400302 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-utilities" (OuterVolumeSpecName: 
"utilities") pod "37d6bf8e-799a-4933-8d1a-bbb736d4e79c" (UID: "37d6bf8e-799a-4933-8d1a-bbb736d4e79c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.407678 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-kube-api-access-tgxms" (OuterVolumeSpecName: "kube-api-access-tgxms") pod "37d6bf8e-799a-4933-8d1a-bbb736d4e79c" (UID: "37d6bf8e-799a-4933-8d1a-bbb736d4e79c"). InnerVolumeSpecName "kube-api-access-tgxms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.482925 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37d6bf8e-799a-4933-8d1a-bbb736d4e79c" (UID: "37d6bf8e-799a-4933-8d1a-bbb736d4e79c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.501141 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.501173 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgxms\" (UniqueName: \"kubernetes.io/projected/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-kube-api-access-tgxms\") on node \"crc\" DevicePath \"\"" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.501188 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d6bf8e-799a-4933-8d1a-bbb736d4e79c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.621068 5050 generic.go:334] "Generic (PLEG): container finished" podID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerID="4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1" exitCode=0 Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.621133 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zw4x5" event={"ID":"37d6bf8e-799a-4933-8d1a-bbb736d4e79c","Type":"ContainerDied","Data":"4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1"} Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.621176 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zw4x5" event={"ID":"37d6bf8e-799a-4933-8d1a-bbb736d4e79c","Type":"ContainerDied","Data":"758a7adbaeead84b0df9fb4f7721502aa04030d0a63fb08d587bc17280ad7894"} Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.621222 5050 scope.go:117] "RemoveContainer" containerID="4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 
05:56:36.621396 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zw4x5" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.661549 5050 scope.go:117] "RemoveContainer" containerID="519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.676193 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zw4x5"] Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.691059 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zw4x5"] Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.693835 5050 scope.go:117] "RemoveContainer" containerID="e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.735622 5050 scope.go:117] "RemoveContainer" containerID="4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1" Jan 31 05:56:36 crc kubenswrapper[5050]: E0131 05:56:36.736470 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1\": container with ID starting with 4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1 not found: ID does not exist" containerID="4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.736530 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1"} err="failed to get container status \"4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1\": rpc error: code = NotFound desc = could not find container \"4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1\": container with ID starting with 
4baa6e2aa77ef466e43018e5a0b863b08ccf4f23efdb61530a1e634872fb9cf1 not found: ID does not exist" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.736570 5050 scope.go:117] "RemoveContainer" containerID="519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e" Jan 31 05:56:36 crc kubenswrapper[5050]: E0131 05:56:36.737090 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e\": container with ID starting with 519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e not found: ID does not exist" containerID="519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.737147 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e"} err="failed to get container status \"519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e\": rpc error: code = NotFound desc = could not find container \"519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e\": container with ID starting with 519af8b426bd3f9d453ecdc31c9a2da8ff57d88704615af2c05d8744efaa136e not found: ID does not exist" Jan 31 05:56:36 crc kubenswrapper[5050]: I0131 05:56:36.737178 5050 scope.go:117] "RemoveContainer" containerID="e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2" Jan 31 05:56:36 crc kubenswrapper[5050]: E0131 05:56:36.737679 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2\": container with ID starting with e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2 not found: ID does not exist" containerID="e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2" Jan 31 05:56:36 crc 
kubenswrapper[5050]: I0131 05:56:36.737760 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2"} err="failed to get container status \"e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2\": rpc error: code = NotFound desc = could not find container \"e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2\": container with ID starting with e1ba150c419c262b1242b39831da10ffd3b22e3bc5a219b0a3578c8491dfd2d2 not found: ID does not exist" Jan 31 05:56:37 crc kubenswrapper[5050]: I0131 05:56:37.753612 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" path="/var/lib/kubelet/pods/37d6bf8e-799a-4933-8d1a-bbb736d4e79c/volumes" Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.748709 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.750017 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.760051 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.769756 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjmn5"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.777666 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.784253 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsf58"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 
05:56:51.793154 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.800482 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.805877 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.810844 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjmn5"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.815839 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wq2kf"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.820969 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spzs8"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.828789 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.831214 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5kvhq"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.837301 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tbwkx"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.842916 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.859186 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s56nb"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.862618 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-69254"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.869579 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4pkg5"] Jan 31 05:56:51 crc kubenswrapper[5050]: I0131 05:56:51.874902 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d4vpq"] Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.755447 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0649affe-1489-4041-9156-d876c086ca3c" path="/var/lib/kubelet/pods/0649affe-1489-4041-9156-d876c086ca3c/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.756567 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ef1790-aaf0-4cc6-aef2-e79671f739ee" path="/var/lib/kubelet/pods/14ef1790-aaf0-4cc6-aef2-e79671f739ee/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.757343 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23bdfddb-2289-439d-bc8d-7185ba9e9d5f" path="/var/lib/kubelet/pods/23bdfddb-2289-439d-bc8d-7185ba9e9d5f/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.758131 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b24e3e-7581-493b-8bd5-0dd7ff66858b" path="/var/lib/kubelet/pods/50b24e3e-7581-493b-8bd5-0dd7ff66858b/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.760092 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e1d789-6294-471b-b43c-5c0220fb84a6" path="/var/lib/kubelet/pods/83e1d789-6294-471b-b43c-5c0220fb84a6/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.760815 5050 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="8790de5c-1f5c-4b1a-ba80-f8747c457975" path="/var/lib/kubelet/pods/8790de5c-1f5c-4b1a-ba80-f8747c457975/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.761424 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963e7005-964e-4472-9a34-0407ee972f9f" path="/var/lib/kubelet/pods/963e7005-964e-4472-9a34-0407ee972f9f/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.762613 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c1bb5a-fc74-4d4e-8991-4945e6517846" path="/var/lib/kubelet/pods/d9c1bb5a-fc74-4d4e-8991-4945e6517846/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.763268 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d56aec-90df-4428-a321-97fcf90ff7f6" path="/var/lib/kubelet/pods/e8d56aec-90df-4428-a321-97fcf90ff7f6/volumes" Jan 31 05:56:53 crc kubenswrapper[5050]: I0131 05:56:53.763907 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3eaa943-616f-4418-8969-77ad18f14208" path="/var/lib/kubelet/pods/f3eaa943-616f-4418-8969-77ad18f14208/volumes" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.518663 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c"] Jan 31 05:56:57 crc kubenswrapper[5050]: E0131 05:56:57.519577 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="extract-content" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.519593 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="extract-content" Jan 31 05:56:57 crc kubenswrapper[5050]: E0131 05:56:57.519616 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="extract-utilities" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.519625 
5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="extract-utilities" Jan 31 05:56:57 crc kubenswrapper[5050]: E0131 05:56:57.519639 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="registry-server" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.519647 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="registry-server" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.519856 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d6bf8e-799a-4933-8d1a-bbb736d4e79c" containerName="registry-server" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.520701 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.524048 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.524290 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.524447 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.524600 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.524759 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.537562 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c"] Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.543746 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.543808 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.543888 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.544025 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pchgv\" (UniqueName: \"kubernetes.io/projected/8eabec3b-eead-4a45-9836-2a4985f344fc-kube-api-access-pchgv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 
05:56:57.544100 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.645755 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.645870 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pchgv\" (UniqueName: \"kubernetes.io/projected/8eabec3b-eead-4a45-9836-2a4985f344fc-kube-api-access-pchgv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.645932 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.646024 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.646050 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.653002 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.653627 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.654211 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc 
kubenswrapper[5050]: I0131 05:56:57.665091 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.669940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pchgv\" (UniqueName: \"kubernetes.io/projected/8eabec3b-eead-4a45-9836-2a4985f344fc-kube-api-access-pchgv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:57 crc kubenswrapper[5050]: I0131 05:56:57.865060 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:56:58 crc kubenswrapper[5050]: I0131 05:56:58.440185 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c"] Jan 31 05:56:58 crc kubenswrapper[5050]: I0131 05:56:58.823937 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" event={"ID":"8eabec3b-eead-4a45-9836-2a4985f344fc","Type":"ContainerStarted","Data":"d8e5c8f7b5766ebe82a36186dbc1c1982e58caf904eb62ea7bc1a7fb4366a002"} Jan 31 05:56:59 crc kubenswrapper[5050]: I0131 05:56:59.833043 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" event={"ID":"8eabec3b-eead-4a45-9836-2a4985f344fc","Type":"ContainerStarted","Data":"6b941f53186c0384e0a1d2bbd6dfa3205d05413da318971ec590cf2808494148"} Jan 31 05:56:59 crc kubenswrapper[5050]: I0131 05:56:59.849998 5050 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" podStartSLOduration=2.360574445 podStartE2EDuration="2.849979699s" podCreationTimestamp="2026-01-31 05:56:57 +0000 UTC" firstStartedPulling="2026-01-31 05:56:58.447945198 +0000 UTC m=+2143.497106804" lastFinishedPulling="2026-01-31 05:56:58.937350432 +0000 UTC m=+2143.986512058" observedRunningTime="2026-01-31 05:56:59.847431439 +0000 UTC m=+2144.896593045" watchObservedRunningTime="2026-01-31 05:56:59.849979699 +0000 UTC m=+2144.899141295" Jan 31 05:57:11 crc kubenswrapper[5050]: I0131 05:57:11.959504 5050 generic.go:334] "Generic (PLEG): container finished" podID="8eabec3b-eead-4a45-9836-2a4985f344fc" containerID="6b941f53186c0384e0a1d2bbd6dfa3205d05413da318971ec590cf2808494148" exitCode=0 Jan 31 05:57:11 crc kubenswrapper[5050]: I0131 05:57:11.959583 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" event={"ID":"8eabec3b-eead-4a45-9836-2a4985f344fc","Type":"ContainerDied","Data":"6b941f53186c0384e0a1d2bbd6dfa3205d05413da318971ec590cf2808494148"} Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.546550 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.595921 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pchgv\" (UniqueName: \"kubernetes.io/projected/8eabec3b-eead-4a45-9836-2a4985f344fc-kube-api-access-pchgv\") pod \"8eabec3b-eead-4a45-9836-2a4985f344fc\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.596078 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ssh-key-openstack-edpm-ipam\") pod \"8eabec3b-eead-4a45-9836-2a4985f344fc\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.596218 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-repo-setup-combined-ca-bundle\") pod \"8eabec3b-eead-4a45-9836-2a4985f344fc\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.596280 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ceph\") pod \"8eabec3b-eead-4a45-9836-2a4985f344fc\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.596369 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-inventory\") pod \"8eabec3b-eead-4a45-9836-2a4985f344fc\" (UID: \"8eabec3b-eead-4a45-9836-2a4985f344fc\") " Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.601432 5050 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eabec3b-eead-4a45-9836-2a4985f344fc-kube-api-access-pchgv" (OuterVolumeSpecName: "kube-api-access-pchgv") pod "8eabec3b-eead-4a45-9836-2a4985f344fc" (UID: "8eabec3b-eead-4a45-9836-2a4985f344fc"). InnerVolumeSpecName "kube-api-access-pchgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.603604 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8eabec3b-eead-4a45-9836-2a4985f344fc" (UID: "8eabec3b-eead-4a45-9836-2a4985f344fc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.604129 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ceph" (OuterVolumeSpecName: "ceph") pod "8eabec3b-eead-4a45-9836-2a4985f344fc" (UID: "8eabec3b-eead-4a45-9836-2a4985f344fc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.637064 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-inventory" (OuterVolumeSpecName: "inventory") pod "8eabec3b-eead-4a45-9836-2a4985f344fc" (UID: "8eabec3b-eead-4a45-9836-2a4985f344fc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.643502 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8eabec3b-eead-4a45-9836-2a4985f344fc" (UID: "8eabec3b-eead-4a45-9836-2a4985f344fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.698414 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.698460 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.698485 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pchgv\" (UniqueName: \"kubernetes.io/projected/8eabec3b-eead-4a45-9836-2a4985f344fc-kube-api-access-pchgv\") on node \"crc\" DevicePath \"\""
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.698507 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.698527 5050 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eabec3b-eead-4a45-9836-2a4985f344fc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.981066 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c" event={"ID":"8eabec3b-eead-4a45-9836-2a4985f344fc","Type":"ContainerDied","Data":"d8e5c8f7b5766ebe82a36186dbc1c1982e58caf904eb62ea7bc1a7fb4366a002"}
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.981111 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e5c8f7b5766ebe82a36186dbc1c1982e58caf904eb62ea7bc1a7fb4366a002"
Jan 31 05:57:13 crc kubenswrapper[5050]: I0131 05:57:13.981167 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.076404 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"]
Jan 31 05:57:14 crc kubenswrapper[5050]: E0131 05:57:14.076840 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eabec3b-eead-4a45-9836-2a4985f344fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.076864 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eabec3b-eead-4a45-9836-2a4985f344fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.077110 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eabec3b-eead-4a45-9836-2a4985f344fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.077821 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.088559 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.088633 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.088735 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.088787 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.088931 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.097796 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"]
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.118283 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.118564 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.118680 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.119007 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.119077 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7mh\" (UniqueName: \"kubernetes.io/projected/ac858115-58fc-4cae-be54-8fc858f07268-kube-api-access-8t7mh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.221175 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.221292 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7mh\" (UniqueName: \"kubernetes.io/projected/ac858115-58fc-4cae-be54-8fc858f07268-kube-api-access-8t7mh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.221406 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.221610 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.221701 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.226259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.227414 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.230524 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.235274 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.237201 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7mh\" (UniqueName: \"kubernetes.io/projected/ac858115-58fc-4cae-be54-8fc858f07268-kube-api-access-8t7mh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m2976\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:14 crc kubenswrapper[5050]: I0131 05:57:14.411055 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:57:15 crc kubenswrapper[5050]: I0131 05:57:15.031668 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"]
Jan 31 05:57:15 crc kubenswrapper[5050]: W0131 05:57:15.037912 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac858115_58fc_4cae_be54_8fc858f07268.slice/crio-c5796a9bd07e25fb42d1ac71f48bcfbecddaee86a04db438caad5230b193eab3 WatchSource:0}: Error finding container c5796a9bd07e25fb42d1ac71f48bcfbecddaee86a04db438caad5230b193eab3: Status 404 returned error can't find the container with id c5796a9bd07e25fb42d1ac71f48bcfbecddaee86a04db438caad5230b193eab3
Jan 31 05:57:15 crc kubenswrapper[5050]: I0131 05:57:15.997768 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976" event={"ID":"ac858115-58fc-4cae-be54-8fc858f07268","Type":"ContainerStarted","Data":"a3cf576561ad940fa03094dcbe2dcc7448b798f55768022f03dc791561650feb"}
Jan 31 05:57:15 crc kubenswrapper[5050]: I0131 05:57:15.998203 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976" event={"ID":"ac858115-58fc-4cae-be54-8fc858f07268","Type":"ContainerStarted","Data":"c5796a9bd07e25fb42d1ac71f48bcfbecddaee86a04db438caad5230b193eab3"}
Jan 31 05:57:16 crc kubenswrapper[5050]: I0131 05:57:16.027294 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976" podStartSLOduration=1.616951408 podStartE2EDuration="2.027266648s" podCreationTimestamp="2026-01-31 05:57:14 +0000 UTC" firstStartedPulling="2026-01-31 05:57:15.042163636 +0000 UTC m=+2160.091325242" lastFinishedPulling="2026-01-31 05:57:15.452478886 +0000 UTC m=+2160.501640482" observedRunningTime="2026-01-31 05:57:16.014469909 +0000 UTC m=+2161.063631515" watchObservedRunningTime="2026-01-31 05:57:16.027266648 +0000 UTC m=+2161.076428284"
Jan 31 05:57:28 crc kubenswrapper[5050]: I0131 05:57:28.119772 5050 scope.go:117] "RemoveContainer" containerID="ab86aa23eb5ea5decf4d2c346996e50ef008f5edd7cc55bfd2d0f878f69c482b"
Jan 31 05:57:28 crc kubenswrapper[5050]: I0131 05:57:28.150846 5050 scope.go:117] "RemoveContainer" containerID="81cc5cd56b1997a8ff3c38c45d9ae2d9329e45de5f7883a6d06f1a243ee03549"
Jan 31 05:57:28 crc kubenswrapper[5050]: I0131 05:57:28.208606 5050 scope.go:117] "RemoveContainer" containerID="6846abaa2e4901ddd9f440b943f17549ca3b401ebf61099dfae993ebbc8c4d58"
Jan 31 05:57:28 crc kubenswrapper[5050]: I0131 05:57:28.279513 5050 scope.go:117] "RemoveContainer" containerID="f921ea92b022c7c7a79523c0b4904588e421c1a660b1dd4db274d92ca73a2217"
Jan 31 05:57:28 crc kubenswrapper[5050]: I0131 05:57:28.338314 5050 scope.go:117] "RemoveContainer" containerID="afc9601a0b72ad604bbb9cf72b065ee53134820d9d1b68b1035ae900b2654c22"
Jan 31 05:57:28 crc kubenswrapper[5050]: I0131 05:57:28.373634 5050 scope.go:117] "RemoveContainer" containerID="92867a083d1c3e48c8d8f07ec7b4378653795d4fbf15a46c087203570b7f2010"
Jan 31 05:57:28 crc kubenswrapper[5050]: I0131 05:57:28.406908 5050 scope.go:117] "RemoveContainer" containerID="5309ef94180656087b9747dde22bac2730c648771bb11bacbe3c5645bf77c34a"
Jan 31 05:58:09 crc kubenswrapper[5050]: I0131 05:58:09.018334 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:58:09 crc kubenswrapper[5050]: I0131 05:58:09.019249 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:58:28 crc kubenswrapper[5050]: I0131 05:58:28.575760 5050 scope.go:117] "RemoveContainer" containerID="ac8ecc1a24f6905e99774fad0b0c7bf95c6aa701013ff744dc50a73d90de37e0"
Jan 31 05:58:28 crc kubenswrapper[5050]: I0131 05:58:28.627196 5050 scope.go:117] "RemoveContainer" containerID="ca6f31697f0b00e96de8d760c597f53b368e458fb733e5b1c8b775280fc3badb"
Jan 31 05:58:28 crc kubenswrapper[5050]: I0131 05:58:28.699795 5050 scope.go:117] "RemoveContainer" containerID="3be545150fbdae10ff8219e4e1707b0375a2426a0895baa8afe636de57948123"
Jan 31 05:58:39 crc kubenswrapper[5050]: I0131 05:58:39.017878 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:58:39 crc kubenswrapper[5050]: I0131 05:58:39.018709 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:59:08 crc kubenswrapper[5050]: I0131 05:59:08.435759 5050 generic.go:334] "Generic (PLEG): container finished" podID="ac858115-58fc-4cae-be54-8fc858f07268" containerID="a3cf576561ad940fa03094dcbe2dcc7448b798f55768022f03dc791561650feb" exitCode=0
Jan 31 05:59:08 crc kubenswrapper[5050]: I0131 05:59:08.435835 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976" event={"ID":"ac858115-58fc-4cae-be54-8fc858f07268","Type":"ContainerDied","Data":"a3cf576561ad940fa03094dcbe2dcc7448b798f55768022f03dc791561650feb"}
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.018095 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.018485 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.018552 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62"
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.019573 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.019673 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" gracePeriod=600
Jan 31 05:59:09 crc kubenswrapper[5050]: E0131 05:59:09.142132 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42"
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.451417 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" exitCode=0
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.451498 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239"}
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.451565 5050 scope.go:117] "RemoveContainer" containerID="a0ffe130a70cd443f5238f6453ca9259b4bf9e12e4a2045bca916cd6b95e0823"
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.452416 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239"
Jan 31 05:59:09 crc kubenswrapper[5050]: E0131 05:59:09.452683 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42"
Jan 31 05:59:09 crc kubenswrapper[5050]: I0131 05:59:09.967923 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.033216 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-inventory\") pod \"ac858115-58fc-4cae-be54-8fc858f07268\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") "
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.033294 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-bootstrap-combined-ca-bundle\") pod \"ac858115-58fc-4cae-be54-8fc858f07268\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") "
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.033322 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ceph\") pod \"ac858115-58fc-4cae-be54-8fc858f07268\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") "
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.033357 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ssh-key-openstack-edpm-ipam\") pod \"ac858115-58fc-4cae-be54-8fc858f07268\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") "
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.033403 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t7mh\" (UniqueName: \"kubernetes.io/projected/ac858115-58fc-4cae-be54-8fc858f07268-kube-api-access-8t7mh\") pod \"ac858115-58fc-4cae-be54-8fc858f07268\" (UID: \"ac858115-58fc-4cae-be54-8fc858f07268\") "
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.039253 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ac858115-58fc-4cae-be54-8fc858f07268" (UID: "ac858115-58fc-4cae-be54-8fc858f07268"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.039974 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac858115-58fc-4cae-be54-8fc858f07268-kube-api-access-8t7mh" (OuterVolumeSpecName: "kube-api-access-8t7mh") pod "ac858115-58fc-4cae-be54-8fc858f07268" (UID: "ac858115-58fc-4cae-be54-8fc858f07268"). InnerVolumeSpecName "kube-api-access-8t7mh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.041717 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ceph" (OuterVolumeSpecName: "ceph") pod "ac858115-58fc-4cae-be54-8fc858f07268" (UID: "ac858115-58fc-4cae-be54-8fc858f07268"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.078717 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-inventory" (OuterVolumeSpecName: "inventory") pod "ac858115-58fc-4cae-be54-8fc858f07268" (UID: "ac858115-58fc-4cae-be54-8fc858f07268"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.084788 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ac858115-58fc-4cae-be54-8fc858f07268" (UID: "ac858115-58fc-4cae-be54-8fc858f07268"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.134910 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t7mh\" (UniqueName: \"kubernetes.io/projected/ac858115-58fc-4cae-be54-8fc858f07268-kube-api-access-8t7mh\") on node \"crc\" DevicePath \"\""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.135024 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.135044 5050 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.135061 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ceph\") on node \"crc\" DevicePath \"\""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.135078 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac858115-58fc-4cae-be54-8fc858f07268-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.466755 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976" event={"ID":"ac858115-58fc-4cae-be54-8fc858f07268","Type":"ContainerDied","Data":"c5796a9bd07e25fb42d1ac71f48bcfbecddaee86a04db438caad5230b193eab3"}
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.467179 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5796a9bd07e25fb42d1ac71f48bcfbecddaee86a04db438caad5230b193eab3"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.466825 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m2976"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.586719 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"]
Jan 31 05:59:10 crc kubenswrapper[5050]: E0131 05:59:10.587223 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac858115-58fc-4cae-be54-8fc858f07268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.587252 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac858115-58fc-4cae-be54-8fc858f07268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.587520 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac858115-58fc-4cae-be54-8fc858f07268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.588435 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.595979 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.596369 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.596698 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.596770 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.597242 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.610310 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"]
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.646059 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdvs\" (UniqueName: \"kubernetes.io/projected/4d011c07-7fee-4c90-a77a-387c778675c1-kube-api-access-jkdvs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.650127 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.650293 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.650390 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.752591 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdvs\" (UniqueName: \"kubernetes.io/projected/4d011c07-7fee-4c90-a77a-387c778675c1-kube-api-access-jkdvs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.752713 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.752743 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.752769 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.757195 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.757888 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.760000 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.772866 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdvs\" (UniqueName: \"kubernetes.io/projected/4d011c07-7fee-4c90-a77a-387c778675c1-kube-api-access-jkdvs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:10 crc kubenswrapper[5050]: I0131 05:59:10.945406 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"
Jan 31 05:59:11 crc kubenswrapper[5050]: I0131 05:59:11.568543 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl"]
Jan 31 05:59:12 crc kubenswrapper[5050]: I0131 05:59:12.488551 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl" event={"ID":"4d011c07-7fee-4c90-a77a-387c778675c1","Type":"ContainerStarted","Data":"12206a45111a61bff94b546b45dc60116c69cbc0f692e1cfbf39efeec23aa856"}
Jan 31 05:59:12 crc kubenswrapper[5050]: I0131 05:59:12.489137 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl" event={"ID":"4d011c07-7fee-4c90-a77a-387c778675c1","Type":"ContainerStarted","Data":"130b1560518bcca014d7d53e05922c90f1da44215299802ceb6c4543e2baa284"}
Jan 31 05:59:12 crc kubenswrapper[5050]: I0131 05:59:12.508443 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl" podStartSLOduration=1.9610878409999999 podStartE2EDuration="2.508422074s" podCreationTimestamp="2026-01-31 05:59:10 +0000 UTC" firstStartedPulling="2026-01-31 05:59:11.573990436 +0000 UTC m=+2276.623152082" lastFinishedPulling="2026-01-31 05:59:12.121324679 +0000 UTC m=+2277.170486315" observedRunningTime="2026-01-31 05:59:12.50299245 +0000 UTC m=+2277.552154046" watchObservedRunningTime="2026-01-31 05:59:12.508422074 +0000 UTC m=+2277.557583670"
Jan 31 05:59:22 crc kubenswrapper[5050]: I0131 05:59:22.736738 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239"
Jan 31 05:59:22 crc kubenswrapper[5050]: E0131 05:59:22.737757 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42"
Jan 31 05:59:35 crc kubenswrapper[5050]: I0131 05:59:35.746359 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239"
Jan 31 05:59:35 crc kubenswrapper[5050]: E0131 05:59:35.747516 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42"
Jan 31 05:59:36 crc kubenswrapper[5050]: I0131 05:59:36.757202 5050 generic.go:334] "Generic (PLEG): container finished"
podID="4d011c07-7fee-4c90-a77a-387c778675c1" containerID="12206a45111a61bff94b546b45dc60116c69cbc0f692e1cfbf39efeec23aa856" exitCode=0 Jan 31 05:59:36 crc kubenswrapper[5050]: I0131 05:59:36.757284 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl" event={"ID":"4d011c07-7fee-4c90-a77a-387c778675c1","Type":"ContainerDied","Data":"12206a45111a61bff94b546b45dc60116c69cbc0f692e1cfbf39efeec23aa856"} Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.229808 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.333223 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkdvs\" (UniqueName: \"kubernetes.io/projected/4d011c07-7fee-4c90-a77a-387c778675c1-kube-api-access-jkdvs\") pod \"4d011c07-7fee-4c90-a77a-387c778675c1\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.333332 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ceph\") pod \"4d011c07-7fee-4c90-a77a-387c778675c1\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.333407 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-inventory\") pod \"4d011c07-7fee-4c90-a77a-387c778675c1\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.333491 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ssh-key-openstack-edpm-ipam\") pod \"4d011c07-7fee-4c90-a77a-387c778675c1\" (UID: \"4d011c07-7fee-4c90-a77a-387c778675c1\") " Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.340453 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ceph" (OuterVolumeSpecName: "ceph") pod "4d011c07-7fee-4c90-a77a-387c778675c1" (UID: "4d011c07-7fee-4c90-a77a-387c778675c1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.342326 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d011c07-7fee-4c90-a77a-387c778675c1-kube-api-access-jkdvs" (OuterVolumeSpecName: "kube-api-access-jkdvs") pod "4d011c07-7fee-4c90-a77a-387c778675c1" (UID: "4d011c07-7fee-4c90-a77a-387c778675c1"). InnerVolumeSpecName "kube-api-access-jkdvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.373169 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d011c07-7fee-4c90-a77a-387c778675c1" (UID: "4d011c07-7fee-4c90-a77a-387c778675c1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.383745 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-inventory" (OuterVolumeSpecName: "inventory") pod "4d011c07-7fee-4c90-a77a-387c778675c1" (UID: "4d011c07-7fee-4c90-a77a-387c778675c1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.436023 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.436331 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.436354 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d011c07-7fee-4c90-a77a-387c778675c1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.436371 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkdvs\" (UniqueName: \"kubernetes.io/projected/4d011c07-7fee-4c90-a77a-387c778675c1-kube-api-access-jkdvs\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.778683 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl" event={"ID":"4d011c07-7fee-4c90-a77a-387c778675c1","Type":"ContainerDied","Data":"130b1560518bcca014d7d53e05922c90f1da44215299802ceb6c4543e2baa284"} Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.778726 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130b1560518bcca014d7d53e05922c90f1da44215299802ceb6c4543e2baa284" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.778783 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.901171 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6"] Jan 31 05:59:38 crc kubenswrapper[5050]: E0131 05:59:38.901731 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d011c07-7fee-4c90-a77a-387c778675c1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.901768 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d011c07-7fee-4c90-a77a-387c778675c1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.902079 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d011c07-7fee-4c90-a77a-387c778675c1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.903089 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.909213 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.909464 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.910021 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.910096 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.915574 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:59:38 crc kubenswrapper[5050]: I0131 05:59:38.942384 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6"] Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.049253 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.049327 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: 
\"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.049449 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.049489 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrdx\" (UniqueName: \"kubernetes.io/projected/050ca660-6308-4916-8b90-a3bffeca8e39-kube-api-access-6rrdx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.152073 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.152198 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: 
I0131 05:59:39.152379 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.152473 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrdx\" (UniqueName: \"kubernetes.io/projected/050ca660-6308-4916-8b90-a3bffeca8e39-kube-api-access-6rrdx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.160210 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.160614 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.161271 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ceph\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.175191 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrdx\" (UniqueName: \"kubernetes.io/projected/050ca660-6308-4916-8b90-a3bffeca8e39-kube-api-access-6rrdx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vllq6\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.237810 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.616381 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6"] Jan 31 05:59:39 crc kubenswrapper[5050]: I0131 05:59:39.786589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" event={"ID":"050ca660-6308-4916-8b90-a3bffeca8e39","Type":"ContainerStarted","Data":"2649540e779d50c685eaf77b17d7a53e08979c604acb5b516585a34b7cf2aa89"} Jan 31 05:59:40 crc kubenswrapper[5050]: I0131 05:59:40.796053 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" event={"ID":"050ca660-6308-4916-8b90-a3bffeca8e39","Type":"ContainerStarted","Data":"ea291546d3945d4c0f56efa1d5286bb97e0bdbca8ac9fd4418134e6bc8290cad"} Jan 31 05:59:40 crc kubenswrapper[5050]: I0131 05:59:40.819495 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" podStartSLOduration=2.2518620560000002 
podStartE2EDuration="2.819472037s" podCreationTimestamp="2026-01-31 05:59:38 +0000 UTC" firstStartedPulling="2026-01-31 05:59:39.626967649 +0000 UTC m=+2304.676129245" lastFinishedPulling="2026-01-31 05:59:40.19457763 +0000 UTC m=+2305.243739226" observedRunningTime="2026-01-31 05:59:40.812164823 +0000 UTC m=+2305.861326439" watchObservedRunningTime="2026-01-31 05:59:40.819472037 +0000 UTC m=+2305.868633643" Jan 31 05:59:44 crc kubenswrapper[5050]: I0131 05:59:44.848438 5050 generic.go:334] "Generic (PLEG): container finished" podID="050ca660-6308-4916-8b90-a3bffeca8e39" containerID="ea291546d3945d4c0f56efa1d5286bb97e0bdbca8ac9fd4418134e6bc8290cad" exitCode=0 Jan 31 05:59:44 crc kubenswrapper[5050]: I0131 05:59:44.848490 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" event={"ID":"050ca660-6308-4916-8b90-a3bffeca8e39","Type":"ContainerDied","Data":"ea291546d3945d4c0f56efa1d5286bb97e0bdbca8ac9fd4418134e6bc8290cad"} Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.386481 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.520347 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rrdx\" (UniqueName: \"kubernetes.io/projected/050ca660-6308-4916-8b90-a3bffeca8e39-kube-api-access-6rrdx\") pod \"050ca660-6308-4916-8b90-a3bffeca8e39\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.520560 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ceph\") pod \"050ca660-6308-4916-8b90-a3bffeca8e39\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.520676 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-inventory\") pod \"050ca660-6308-4916-8b90-a3bffeca8e39\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.520718 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ssh-key-openstack-edpm-ipam\") pod \"050ca660-6308-4916-8b90-a3bffeca8e39\" (UID: \"050ca660-6308-4916-8b90-a3bffeca8e39\") " Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.529884 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ceph" (OuterVolumeSpecName: "ceph") pod "050ca660-6308-4916-8b90-a3bffeca8e39" (UID: "050ca660-6308-4916-8b90-a3bffeca8e39"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.530040 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050ca660-6308-4916-8b90-a3bffeca8e39-kube-api-access-6rrdx" (OuterVolumeSpecName: "kube-api-access-6rrdx") pod "050ca660-6308-4916-8b90-a3bffeca8e39" (UID: "050ca660-6308-4916-8b90-a3bffeca8e39"). InnerVolumeSpecName "kube-api-access-6rrdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.556182 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-inventory" (OuterVolumeSpecName: "inventory") pod "050ca660-6308-4916-8b90-a3bffeca8e39" (UID: "050ca660-6308-4916-8b90-a3bffeca8e39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.579667 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "050ca660-6308-4916-8b90-a3bffeca8e39" (UID: "050ca660-6308-4916-8b90-a3bffeca8e39"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.623141 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.623201 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.623221 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rrdx\" (UniqueName: \"kubernetes.io/projected/050ca660-6308-4916-8b90-a3bffeca8e39-kube-api-access-6rrdx\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.623242 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/050ca660-6308-4916-8b90-a3bffeca8e39-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.869453 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" event={"ID":"050ca660-6308-4916-8b90-a3bffeca8e39","Type":"ContainerDied","Data":"2649540e779d50c685eaf77b17d7a53e08979c604acb5b516585a34b7cf2aa89"} Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.869723 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2649540e779d50c685eaf77b17d7a53e08979c604acb5b516585a34b7cf2aa89" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.869537 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vllq6" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.967081 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr"] Jan 31 05:59:46 crc kubenswrapper[5050]: E0131 05:59:46.967571 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050ca660-6308-4916-8b90-a3bffeca8e39" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.967596 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="050ca660-6308-4916-8b90-a3bffeca8e39" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.967849 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="050ca660-6308-4916-8b90-a3bffeca8e39" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.968686 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.975515 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.975586 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.976114 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.976149 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.976294 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 05:59:46 crc kubenswrapper[5050]: I0131 05:59:46.976765 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr"] Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.035087 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.035697 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.035847 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.036189 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbkh\" (UniqueName: \"kubernetes.io/projected/095509b4-0f95-44ce-aa1d-9ad98503fbac-kube-api-access-dzbkh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.138734 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzbkh\" (UniqueName: \"kubernetes.io/projected/095509b4-0f95-44ce-aa1d-9ad98503fbac-kube-api-access-dzbkh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.139048 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.139147 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.139237 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.146088 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.146239 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.147592 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.161435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzbkh\" (UniqueName: \"kubernetes.io/projected/095509b4-0f95-44ce-aa1d-9ad98503fbac-kube-api-access-dzbkh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2cttr\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.295979 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.737437 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 05:59:47 crc kubenswrapper[5050]: E0131 05:59:47.738218 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 05:59:47 crc kubenswrapper[5050]: I0131 05:59:47.895080 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr"] Jan 31 05:59:48 crc kubenswrapper[5050]: I0131 05:59:48.888699 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" event={"ID":"095509b4-0f95-44ce-aa1d-9ad98503fbac","Type":"ContainerStarted","Data":"e67dbca15c2a9dd7aad22f7408eb1ccb911c9a060810e85acb1556350c716969"} Jan 31 05:59:48 crc kubenswrapper[5050]: I0131 05:59:48.889332 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" event={"ID":"095509b4-0f95-44ce-aa1d-9ad98503fbac","Type":"ContainerStarted","Data":"c955f2942ca6160f9749f860f3b2b356f7d148a6fb79bd93aa7b62ad50f36163"} Jan 31 05:59:48 crc kubenswrapper[5050]: I0131 05:59:48.925062 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" podStartSLOduration=2.528030106 podStartE2EDuration="2.925041376s" podCreationTimestamp="2026-01-31 05:59:46 +0000 UTC" firstStartedPulling="2026-01-31 05:59:47.908458351 +0000 UTC m=+2312.957619957" lastFinishedPulling="2026-01-31 05:59:48.305469601 +0000 UTC m=+2313.354631227" observedRunningTime="2026-01-31 05:59:48.914866785 +0000 UTC m=+2313.964028381" watchObservedRunningTime="2026-01-31 05:59:48.925041376 +0000 UTC m=+2313.974202982" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.161628 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l"] Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.163133 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.168406 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.170587 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.181098 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l"] Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.203817 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8932328d-7037-4c31-8ead-750b75c54e23-secret-volume\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.203873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj59v\" (UniqueName: \"kubernetes.io/projected/8932328d-7037-4c31-8ead-750b75c54e23-kube-api-access-dj59v\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.203909 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8932328d-7037-4c31-8ead-750b75c54e23-config-volume\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.304997 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8932328d-7037-4c31-8ead-750b75c54e23-secret-volume\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.305049 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj59v\" (UniqueName: \"kubernetes.io/projected/8932328d-7037-4c31-8ead-750b75c54e23-kube-api-access-dj59v\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.305107 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8932328d-7037-4c31-8ead-750b75c54e23-config-volume\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.306251 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8932328d-7037-4c31-8ead-750b75c54e23-config-volume\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.310577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8932328d-7037-4c31-8ead-750b75c54e23-secret-volume\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.320255 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj59v\" (UniqueName: \"kubernetes.io/projected/8932328d-7037-4c31-8ead-750b75c54e23-kube-api-access-dj59v\") pod \"collect-profiles-29497320-zth2l\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:00 crc kubenswrapper[5050]: I0131 06:00:00.501395 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:01 crc kubenswrapper[5050]: I0131 06:00:01.030435 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l"] Jan 31 06:00:02 crc kubenswrapper[5050]: I0131 06:00:02.025190 5050 generic.go:334] "Generic (PLEG): container finished" podID="8932328d-7037-4c31-8ead-750b75c54e23" containerID="d5f73fbe0ac83adfaef729c2961b262263dd3d6ce015bd41e92c049a71cf2b7e" exitCode=0 Jan 31 06:00:02 crc kubenswrapper[5050]: I0131 06:00:02.025308 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" event={"ID":"8932328d-7037-4c31-8ead-750b75c54e23","Type":"ContainerDied","Data":"d5f73fbe0ac83adfaef729c2961b262263dd3d6ce015bd41e92c049a71cf2b7e"} Jan 31 06:00:02 crc kubenswrapper[5050]: I0131 06:00:02.026141 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" 
event={"ID":"8932328d-7037-4c31-8ead-750b75c54e23","Type":"ContainerStarted","Data":"9e8b536553a6974960839575fe31b61568c397b8911d016c4e5a710040b23485"} Jan 31 06:00:02 crc kubenswrapper[5050]: I0131 06:00:02.737048 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:00:02 crc kubenswrapper[5050]: E0131 06:00:02.737493 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.588580 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.773409 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8932328d-7037-4c31-8ead-750b75c54e23-config-volume\") pod \"8932328d-7037-4c31-8ead-750b75c54e23\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.773857 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj59v\" (UniqueName: \"kubernetes.io/projected/8932328d-7037-4c31-8ead-750b75c54e23-kube-api-access-dj59v\") pod \"8932328d-7037-4c31-8ead-750b75c54e23\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.773981 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8932328d-7037-4c31-8ead-750b75c54e23-secret-volume\") pod \"8932328d-7037-4c31-8ead-750b75c54e23\" (UID: \"8932328d-7037-4c31-8ead-750b75c54e23\") " Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.774283 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8932328d-7037-4c31-8ead-750b75c54e23-config-volume" (OuterVolumeSpecName: "config-volume") pod "8932328d-7037-4c31-8ead-750b75c54e23" (UID: "8932328d-7037-4c31-8ead-750b75c54e23"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.774550 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8932328d-7037-4c31-8ead-750b75c54e23-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.780311 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8932328d-7037-4c31-8ead-750b75c54e23-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8932328d-7037-4c31-8ead-750b75c54e23" (UID: "8932328d-7037-4c31-8ead-750b75c54e23"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.785139 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8932328d-7037-4c31-8ead-750b75c54e23-kube-api-access-dj59v" (OuterVolumeSpecName: "kube-api-access-dj59v") pod "8932328d-7037-4c31-8ead-750b75c54e23" (UID: "8932328d-7037-4c31-8ead-750b75c54e23"). InnerVolumeSpecName "kube-api-access-dj59v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.876047 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj59v\" (UniqueName: \"kubernetes.io/projected/8932328d-7037-4c31-8ead-750b75c54e23-kube-api-access-dj59v\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:03 crc kubenswrapper[5050]: I0131 06:00:03.876086 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8932328d-7037-4c31-8ead-750b75c54e23-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:04 crc kubenswrapper[5050]: I0131 06:00:04.047520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" event={"ID":"8932328d-7037-4c31-8ead-750b75c54e23","Type":"ContainerDied","Data":"9e8b536553a6974960839575fe31b61568c397b8911d016c4e5a710040b23485"} Jan 31 06:00:04 crc kubenswrapper[5050]: I0131 06:00:04.047899 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8b536553a6974960839575fe31b61568c397b8911d016c4e5a710040b23485" Jan 31 06:00:04 crc kubenswrapper[5050]: I0131 06:00:04.047641 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497320-zth2l" Jan 31 06:00:04 crc kubenswrapper[5050]: I0131 06:00:04.694418 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b"] Jan 31 06:00:04 crc kubenswrapper[5050]: I0131 06:00:04.707915 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-dzs5b"] Jan 31 06:00:05 crc kubenswrapper[5050]: I0131 06:00:05.757636 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5915d8a1-8561-481b-990d-60cd35f30d7c" path="/var/lib/kubelet/pods/5915d8a1-8561-481b-990d-60cd35f30d7c/volumes" Jan 31 06:00:16 crc kubenswrapper[5050]: I0131 06:00:16.736881 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:00:16 crc kubenswrapper[5050]: E0131 06:00:16.737836 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:00:22 crc kubenswrapper[5050]: I0131 06:00:22.925824 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zfsvm"] Jan 31 06:00:22 crc kubenswrapper[5050]: E0131 06:00:22.927034 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8932328d-7037-4c31-8ead-750b75c54e23" containerName="collect-profiles" Jan 31 06:00:22 crc kubenswrapper[5050]: I0131 06:00:22.927057 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8932328d-7037-4c31-8ead-750b75c54e23" containerName="collect-profiles" Jan 31 06:00:22 crc 
kubenswrapper[5050]: I0131 06:00:22.927383 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8932328d-7037-4c31-8ead-750b75c54e23" containerName="collect-profiles" Jan 31 06:00:22 crc kubenswrapper[5050]: I0131 06:00:22.929657 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:22 crc kubenswrapper[5050]: I0131 06:00:22.939862 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfsvm"] Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.043846 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-catalog-content\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.044060 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-utilities\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.044100 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glss\" (UniqueName: \"kubernetes.io/projected/edea326f-1f38-4448-b3a5-681840ea402f-kube-api-access-2glss\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.145537 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-utilities\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.145599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glss\" (UniqueName: \"kubernetes.io/projected/edea326f-1f38-4448-b3a5-681840ea402f-kube-api-access-2glss\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.145691 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-catalog-content\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.146220 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-utilities\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.146257 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-catalog-content\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.164924 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glss\" (UniqueName: 
\"kubernetes.io/projected/edea326f-1f38-4448-b3a5-681840ea402f-kube-api-access-2glss\") pod \"community-operators-zfsvm\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.254972 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:23 crc kubenswrapper[5050]: I0131 06:00:23.787827 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfsvm"] Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.118114 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xvk7s"] Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.125697 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.125916 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvk7s"] Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.165477 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/f8765c14-bd09-49a8-949c-15f9b8d58ff7-kube-api-access-hvqfd\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.165595 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-utilities\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 
06:00:24.165738 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-catalog-content\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.256722 5050 generic.go:334] "Generic (PLEG): container finished" podID="edea326f-1f38-4448-b3a5-681840ea402f" containerID="38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7" exitCode=0 Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.256778 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfsvm" event={"ID":"edea326f-1f38-4448-b3a5-681840ea402f","Type":"ContainerDied","Data":"38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7"} Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.256807 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfsvm" event={"ID":"edea326f-1f38-4448-b3a5-681840ea402f","Type":"ContainerStarted","Data":"ad9233ac2ecc2a6ec87889093c401d6e1e24859c109eb819e4d4bdfb7ff04b95"} Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.268804 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/f8765c14-bd09-49a8-949c-15f9b8d58ff7-kube-api-access-hvqfd\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.268900 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-utilities\") pod \"redhat-marketplace-xvk7s\" (UID: 
\"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.268993 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-catalog-content\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.271350 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-catalog-content\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.272133 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-utilities\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.312878 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/f8765c14-bd09-49a8-949c-15f9b8d58ff7-kube-api-access-hvqfd\") pod \"redhat-marketplace-xvk7s\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.464309 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:24 crc kubenswrapper[5050]: I0131 06:00:24.974134 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvk7s"] Jan 31 06:00:25 crc kubenswrapper[5050]: W0131 06:00:25.001799 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8765c14_bd09_49a8_949c_15f9b8d58ff7.slice/crio-3490f9113a14241d74e696bab1c2efad43de9c4b8afcd5dc5903c57d128a2740 WatchSource:0}: Error finding container 3490f9113a14241d74e696bab1c2efad43de9c4b8afcd5dc5903c57d128a2740: Status 404 returned error can't find the container with id 3490f9113a14241d74e696bab1c2efad43de9c4b8afcd5dc5903c57d128a2740 Jan 31 06:00:25 crc kubenswrapper[5050]: I0131 06:00:25.268483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvk7s" event={"ID":"f8765c14-bd09-49a8-949c-15f9b8d58ff7","Type":"ContainerStarted","Data":"3490f9113a14241d74e696bab1c2efad43de9c4b8afcd5dc5903c57d128a2740"} Jan 31 06:00:25 crc kubenswrapper[5050]: I0131 06:00:25.270805 5050 generic.go:334] "Generic (PLEG): container finished" podID="095509b4-0f95-44ce-aa1d-9ad98503fbac" containerID="e67dbca15c2a9dd7aad22f7408eb1ccb911c9a060810e85acb1556350c716969" exitCode=0 Jan 31 06:00:25 crc kubenswrapper[5050]: I0131 06:00:25.270853 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" event={"ID":"095509b4-0f95-44ce-aa1d-9ad98503fbac","Type":"ContainerDied","Data":"e67dbca15c2a9dd7aad22f7408eb1ccb911c9a060810e85acb1556350c716969"} Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.295053 5050 generic.go:334] "Generic (PLEG): container finished" podID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerID="01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2" exitCode=0 Jan 31 06:00:26 crc 
kubenswrapper[5050]: I0131 06:00:26.295217 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvk7s" event={"ID":"f8765c14-bd09-49a8-949c-15f9b8d58ff7","Type":"ContainerDied","Data":"01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2"} Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.303276 5050 generic.go:334] "Generic (PLEG): container finished" podID="edea326f-1f38-4448-b3a5-681840ea402f" containerID="bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d" exitCode=0 Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.304520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfsvm" event={"ID":"edea326f-1f38-4448-b3a5-681840ea402f","Type":"ContainerDied","Data":"bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d"} Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.680537 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.728906 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ssh-key-openstack-edpm-ipam\") pod \"095509b4-0f95-44ce-aa1d-9ad98503fbac\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.729071 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzbkh\" (UniqueName: \"kubernetes.io/projected/095509b4-0f95-44ce-aa1d-9ad98503fbac-kube-api-access-dzbkh\") pod \"095509b4-0f95-44ce-aa1d-9ad98503fbac\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.729120 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ceph\") pod \"095509b4-0f95-44ce-aa1d-9ad98503fbac\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.729169 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory\") pod \"095509b4-0f95-44ce-aa1d-9ad98503fbac\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.738114 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ceph" (OuterVolumeSpecName: "ceph") pod "095509b4-0f95-44ce-aa1d-9ad98503fbac" (UID: "095509b4-0f95-44ce-aa1d-9ad98503fbac"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.741000 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095509b4-0f95-44ce-aa1d-9ad98503fbac-kube-api-access-dzbkh" (OuterVolumeSpecName: "kube-api-access-dzbkh") pod "095509b4-0f95-44ce-aa1d-9ad98503fbac" (UID: "095509b4-0f95-44ce-aa1d-9ad98503fbac"). InnerVolumeSpecName "kube-api-access-dzbkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:00:26 crc kubenswrapper[5050]: E0131 06:00:26.755907 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory podName:095509b4-0f95-44ce-aa1d-9ad98503fbac nodeName:}" failed. No retries permitted until 2026-01-31 06:00:27.255870639 +0000 UTC m=+2352.305032235 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory") pod "095509b4-0f95-44ce-aa1d-9ad98503fbac" (UID: "095509b4-0f95-44ce-aa1d-9ad98503fbac") : error deleting /var/lib/kubelet/pods/095509b4-0f95-44ce-aa1d-9ad98503fbac/volume-subpaths: remove /var/lib/kubelet/pods/095509b4-0f95-44ce-aa1d-9ad98503fbac/volume-subpaths: no such file or directory Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.759402 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "095509b4-0f95-44ce-aa1d-9ad98503fbac" (UID: "095509b4-0f95-44ce-aa1d-9ad98503fbac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.831770 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.831886 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzbkh\" (UniqueName: \"kubernetes.io/projected/095509b4-0f95-44ce-aa1d-9ad98503fbac-kube-api-access-dzbkh\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:26 crc kubenswrapper[5050]: I0131 06:00:26.831896 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.319140 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfsvm" 
event={"ID":"edea326f-1f38-4448-b3a5-681840ea402f","Type":"ContainerStarted","Data":"ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede"} Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.331229 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" event={"ID":"095509b4-0f95-44ce-aa1d-9ad98503fbac","Type":"ContainerDied","Data":"c955f2942ca6160f9749f860f3b2b356f7d148a6fb79bd93aa7b62ad50f36163"} Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.331293 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c955f2942ca6160f9749f860f3b2b356f7d148a6fb79bd93aa7b62ad50f36163" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.331306 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2cttr" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.339587 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory\") pod \"095509b4-0f95-44ce-aa1d-9ad98503fbac\" (UID: \"095509b4-0f95-44ce-aa1d-9ad98503fbac\") " Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.344190 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory" (OuterVolumeSpecName: "inventory") pod "095509b4-0f95-44ce-aa1d-9ad98503fbac" (UID: "095509b4-0f95-44ce-aa1d-9ad98503fbac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.351258 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zfsvm" podStartSLOduration=2.896027194 podStartE2EDuration="5.35123637s" podCreationTimestamp="2026-01-31 06:00:22 +0000 UTC" firstStartedPulling="2026-01-31 06:00:24.259850256 +0000 UTC m=+2349.309011842" lastFinishedPulling="2026-01-31 06:00:26.715059422 +0000 UTC m=+2351.764221018" observedRunningTime="2026-01-31 06:00:27.339188519 +0000 UTC m=+2352.388350115" watchObservedRunningTime="2026-01-31 06:00:27.35123637 +0000 UTC m=+2352.400397976" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.383157 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm"] Jan 31 06:00:27 crc kubenswrapper[5050]: E0131 06:00:27.383740 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095509b4-0f95-44ce-aa1d-9ad98503fbac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.383778 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="095509b4-0f95-44ce-aa1d-9ad98503fbac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.384136 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="095509b4-0f95-44ce-aa1d-9ad98503fbac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.385038 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.397380 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm"] Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.441848 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.442008 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.442038 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.442060 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75858\" (UniqueName: \"kubernetes.io/projected/55e315ea-e973-46ef-bf01-df247abf5353-kube-api-access-75858\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.442207 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/095509b4-0f95-44ce-aa1d-9ad98503fbac-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.544147 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.544499 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.544528 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75858\" (UniqueName: \"kubernetes.io/projected/55e315ea-e973-46ef-bf01-df247abf5353-kube-api-access-75858\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.544657 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.550194 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.550591 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.550860 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.560318 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75858\" (UniqueName: \"kubernetes.io/projected/55e315ea-e973-46ef-bf01-df247abf5353-kube-api-access-75858\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 
06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.708914 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:27 crc kubenswrapper[5050]: I0131 06:00:27.736089 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:00:27 crc kubenswrapper[5050]: E0131 06:00:27.736355 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:00:28 crc kubenswrapper[5050]: I0131 06:00:28.227054 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm"] Jan 31 06:00:28 crc kubenswrapper[5050]: I0131 06:00:28.342857 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" event={"ID":"55e315ea-e973-46ef-bf01-df247abf5353","Type":"ContainerStarted","Data":"ecb4a59bb8f3384cb77b0f8fb30acb1aeb1390f7e34eb470f4cee00d72ffbe66"} Jan 31 06:00:28 crc kubenswrapper[5050]: I0131 06:00:28.347598 5050 generic.go:334] "Generic (PLEG): container finished" podID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerID="b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db" exitCode=0 Jan 31 06:00:28 crc kubenswrapper[5050]: I0131 06:00:28.347732 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvk7s" event={"ID":"f8765c14-bd09-49a8-949c-15f9b8d58ff7","Type":"ContainerDied","Data":"b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db"} Jan 31 06:00:28 
crc kubenswrapper[5050]: I0131 06:00:28.850496 5050 scope.go:117] "RemoveContainer" containerID="02ce8716faf717215c1eeb2a1c91391df3342c073322f56540b995807b7c763d" Jan 31 06:00:29 crc kubenswrapper[5050]: I0131 06:00:29.382191 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" event={"ID":"55e315ea-e973-46ef-bf01-df247abf5353","Type":"ContainerStarted","Data":"b9c2d45dd47e777703cc7561ea77cdd7662736edafcb1efe658576a5665159b0"} Jan 31 06:00:29 crc kubenswrapper[5050]: I0131 06:00:29.386691 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvk7s" event={"ID":"f8765c14-bd09-49a8-949c-15f9b8d58ff7","Type":"ContainerStarted","Data":"d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2"} Jan 31 06:00:29 crc kubenswrapper[5050]: I0131 06:00:29.410258 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" podStartSLOduration=1.6616716679999999 podStartE2EDuration="2.410237357s" podCreationTimestamp="2026-01-31 06:00:27 +0000 UTC" firstStartedPulling="2026-01-31 06:00:28.237124316 +0000 UTC m=+2353.286285912" lastFinishedPulling="2026-01-31 06:00:28.985689994 +0000 UTC m=+2354.034851601" observedRunningTime="2026-01-31 06:00:29.401879885 +0000 UTC m=+2354.451041481" watchObservedRunningTime="2026-01-31 06:00:29.410237357 +0000 UTC m=+2354.459398953" Jan 31 06:00:29 crc kubenswrapper[5050]: I0131 06:00:29.424560 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xvk7s" podStartSLOduration=2.721122214 podStartE2EDuration="5.424541489s" podCreationTimestamp="2026-01-31 06:00:24 +0000 UTC" firstStartedPulling="2026-01-31 06:00:26.30331638 +0000 UTC m=+2351.352478016" lastFinishedPulling="2026-01-31 06:00:29.006735675 +0000 UTC m=+2354.055897291" observedRunningTime="2026-01-31 
06:00:29.424425615 +0000 UTC m=+2354.473587211" watchObservedRunningTime="2026-01-31 06:00:29.424541489 +0000 UTC m=+2354.473703085" Jan 31 06:00:33 crc kubenswrapper[5050]: I0131 06:00:33.255603 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:33 crc kubenswrapper[5050]: I0131 06:00:33.256273 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:33 crc kubenswrapper[5050]: I0131 06:00:33.343855 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:33 crc kubenswrapper[5050]: I0131 06:00:33.428050 5050 generic.go:334] "Generic (PLEG): container finished" podID="55e315ea-e973-46ef-bf01-df247abf5353" containerID="b9c2d45dd47e777703cc7561ea77cdd7662736edafcb1efe658576a5665159b0" exitCode=0 Jan 31 06:00:33 crc kubenswrapper[5050]: I0131 06:00:33.428167 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" event={"ID":"55e315ea-e973-46ef-bf01-df247abf5353","Type":"ContainerDied","Data":"b9c2d45dd47e777703cc7561ea77cdd7662736edafcb1efe658576a5665159b0"} Jan 31 06:00:33 crc kubenswrapper[5050]: I0131 06:00:33.492515 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:34 crc kubenswrapper[5050]: I0131 06:00:34.465583 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:34 crc kubenswrapper[5050]: I0131 06:00:34.465640 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:34 crc kubenswrapper[5050]: I0131 06:00:34.534388 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:34 crc kubenswrapper[5050]: I0131 06:00:34.906853 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.022135 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ssh-key-openstack-edpm-ipam\") pod \"55e315ea-e973-46ef-bf01-df247abf5353\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.022296 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ceph\") pod \"55e315ea-e973-46ef-bf01-df247abf5353\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.022347 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75858\" (UniqueName: \"kubernetes.io/projected/55e315ea-e973-46ef-bf01-df247abf5353-kube-api-access-75858\") pod \"55e315ea-e973-46ef-bf01-df247abf5353\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.022366 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-inventory\") pod \"55e315ea-e973-46ef-bf01-df247abf5353\" (UID: \"55e315ea-e973-46ef-bf01-df247abf5353\") " Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.029177 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ceph" (OuterVolumeSpecName: "ceph") pod "55e315ea-e973-46ef-bf01-df247abf5353" (UID: 
"55e315ea-e973-46ef-bf01-df247abf5353"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.029230 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e315ea-e973-46ef-bf01-df247abf5353-kube-api-access-75858" (OuterVolumeSpecName: "kube-api-access-75858") pod "55e315ea-e973-46ef-bf01-df247abf5353" (UID: "55e315ea-e973-46ef-bf01-df247abf5353"). InnerVolumeSpecName "kube-api-access-75858". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.050465 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "55e315ea-e973-46ef-bf01-df247abf5353" (UID: "55e315ea-e973-46ef-bf01-df247abf5353"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.065979 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-inventory" (OuterVolumeSpecName: "inventory") pod "55e315ea-e973-46ef-bf01-df247abf5353" (UID: "55e315ea-e973-46ef-bf01-df247abf5353"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.124548 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.124602 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.124628 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75858\" (UniqueName: \"kubernetes.io/projected/55e315ea-e973-46ef-bf01-df247abf5353-kube-api-access-75858\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.124646 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55e315ea-e973-46ef-bf01-df247abf5353-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.303503 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfsvm"] Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.457789 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" event={"ID":"55e315ea-e973-46ef-bf01-df247abf5353","Type":"ContainerDied","Data":"ecb4a59bb8f3384cb77b0f8fb30acb1aeb1390f7e34eb470f4cee00d72ffbe66"} Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.457853 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb4a59bb8f3384cb77b0f8fb30acb1aeb1390f7e34eb470f4cee00d72ffbe66" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.458176 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.458311 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zfsvm" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="registry-server" containerID="cri-o://ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede" gracePeriod=2 Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.546416 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.553396 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr"] Jan 31 06:00:35 crc kubenswrapper[5050]: E0131 06:00:35.553754 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e315ea-e973-46ef-bf01-df247abf5353" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.553776 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e315ea-e973-46ef-bf01-df247abf5353" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.553987 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e315ea-e973-46ef-bf01-df247abf5353" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.554613 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.559884 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.560400 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.560614 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.561390 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.561627 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.569204 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr"] Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.637324 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.637391 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjbqc\" (UniqueName: \"kubernetes.io/projected/483749fc-4acc-4fdf-94b0-359fb3d7a82e-kube-api-access-sjbqc\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.637568 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.637604 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.740336 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.740414 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjbqc\" (UniqueName: \"kubernetes.io/projected/483749fc-4acc-4fdf-94b0-359fb3d7a82e-kube-api-access-sjbqc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 
crc kubenswrapper[5050]: I0131 06:00:35.741131 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.741159 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.745605 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.745696 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.747830 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" 
(UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.758086 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjbqc\" (UniqueName: \"kubernetes.io/projected/483749fc-4acc-4fdf-94b0-359fb3d7a82e-kube-api-access-sjbqc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.866319 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:35 crc kubenswrapper[5050]: I0131 06:00:35.937477 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.047867 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-catalog-content\") pod \"edea326f-1f38-4448-b3a5-681840ea402f\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.049758 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2glss\" (UniqueName: \"kubernetes.io/projected/edea326f-1f38-4448-b3a5-681840ea402f-kube-api-access-2glss\") pod \"edea326f-1f38-4448-b3a5-681840ea402f\" (UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.049803 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-utilities\") pod \"edea326f-1f38-4448-b3a5-681840ea402f\" 
(UID: \"edea326f-1f38-4448-b3a5-681840ea402f\") " Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.067266 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-utilities" (OuterVolumeSpecName: "utilities") pod "edea326f-1f38-4448-b3a5-681840ea402f" (UID: "edea326f-1f38-4448-b3a5-681840ea402f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.100154 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edea326f-1f38-4448-b3a5-681840ea402f-kube-api-access-2glss" (OuterVolumeSpecName: "kube-api-access-2glss") pod "edea326f-1f38-4448-b3a5-681840ea402f" (UID: "edea326f-1f38-4448-b3a5-681840ea402f"). InnerVolumeSpecName "kube-api-access-2glss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.135912 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edea326f-1f38-4448-b3a5-681840ea402f" (UID: "edea326f-1f38-4448-b3a5-681840ea402f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.153017 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.153053 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2glss\" (UniqueName: \"kubernetes.io/projected/edea326f-1f38-4448-b3a5-681840ea402f-kube-api-access-2glss\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.153088 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edea326f-1f38-4448-b3a5-681840ea402f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.473696 5050 generic.go:334] "Generic (PLEG): container finished" podID="edea326f-1f38-4448-b3a5-681840ea402f" containerID="ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede" exitCode=0 Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.473811 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zfsvm" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.473858 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfsvm" event={"ID":"edea326f-1f38-4448-b3a5-681840ea402f","Type":"ContainerDied","Data":"ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede"} Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.473934 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfsvm" event={"ID":"edea326f-1f38-4448-b3a5-681840ea402f","Type":"ContainerDied","Data":"ad9233ac2ecc2a6ec87889093c401d6e1e24859c109eb819e4d4bdfb7ff04b95"} Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.473990 5050 scope.go:117] "RemoveContainer" containerID="ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.509103 5050 scope.go:117] "RemoveContainer" containerID="bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.528258 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfsvm"] Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.537877 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zfsvm"] Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.553651 5050 scope.go:117] "RemoveContainer" containerID="38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.575441 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr"] Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.599176 5050 scope.go:117] "RemoveContainer" containerID="ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede" Jan 31 06:00:36 crc 
kubenswrapper[5050]: E0131 06:00:36.600714 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede\": container with ID starting with ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede not found: ID does not exist" containerID="ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.600748 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede"} err="failed to get container status \"ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede\": rpc error: code = NotFound desc = could not find container \"ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede\": container with ID starting with ba027995debf49a3a1d2410bec85216e622428c40c14ff0e9de42f36290c4ede not found: ID does not exist" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.600771 5050 scope.go:117] "RemoveContainer" containerID="bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d" Jan 31 06:00:36 crc kubenswrapper[5050]: E0131 06:00:36.601229 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d\": container with ID starting with bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d not found: ID does not exist" containerID="bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.601278 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d"} err="failed to get container status 
\"bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d\": rpc error: code = NotFound desc = could not find container \"bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d\": container with ID starting with bd1edd6dcc5a065901d5cbde62f7a96210ab360f4256d81e674439449b42874d not found: ID does not exist" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.601300 5050 scope.go:117] "RemoveContainer" containerID="38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7" Jan 31 06:00:36 crc kubenswrapper[5050]: E0131 06:00:36.601666 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7\": container with ID starting with 38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7 not found: ID does not exist" containerID="38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.601690 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7"} err="failed to get container status \"38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7\": rpc error: code = NotFound desc = could not find container \"38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7\": container with ID starting with 38cb309461f043602676e3b612a156f9978317aed51d7205a9057479af1eb8b7 not found: ID does not exist" Jan 31 06:00:36 crc kubenswrapper[5050]: I0131 06:00:36.707270 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvk7s"] Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.483765 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" 
event={"ID":"483749fc-4acc-4fdf-94b0-359fb3d7a82e","Type":"ContainerStarted","Data":"69a883b12b2cec660bd2500053b5423b01e8d06d17af4ed6734cc7a6ad2504a7"} Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.485650 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" event={"ID":"483749fc-4acc-4fdf-94b0-359fb3d7a82e","Type":"ContainerStarted","Data":"03dd8ef015b2ff942717687012eda478351a31a255ee10126630c55beb360d93"} Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.485487 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xvk7s" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="registry-server" containerID="cri-o://d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2" gracePeriod=2 Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.509205 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" podStartSLOduration=1.978956744 podStartE2EDuration="2.50918762s" podCreationTimestamp="2026-01-31 06:00:35 +0000 UTC" firstStartedPulling="2026-01-31 06:00:36.579234432 +0000 UTC m=+2361.628396038" lastFinishedPulling="2026-01-31 06:00:37.109465308 +0000 UTC m=+2362.158626914" observedRunningTime="2026-01-31 06:00:37.4994067 +0000 UTC m=+2362.548568296" watchObservedRunningTime="2026-01-31 06:00:37.50918762 +0000 UTC m=+2362.558349216" Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.753321 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edea326f-1f38-4448-b3a5-681840ea402f" path="/var/lib/kubelet/pods/edea326f-1f38-4448-b3a5-681840ea402f/volumes" Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.949616 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.987987 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-utilities\") pod \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.988435 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/f8765c14-bd09-49a8-949c-15f9b8d58ff7-kube-api-access-hvqfd\") pod \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.988646 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-catalog-content\") pod \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\" (UID: \"f8765c14-bd09-49a8-949c-15f9b8d58ff7\") " Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.989700 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-utilities" (OuterVolumeSpecName: "utilities") pod "f8765c14-bd09-49a8-949c-15f9b8d58ff7" (UID: "f8765c14-bd09-49a8-949c-15f9b8d58ff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:00:37 crc kubenswrapper[5050]: I0131 06:00:37.995683 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8765c14-bd09-49a8-949c-15f9b8d58ff7-kube-api-access-hvqfd" (OuterVolumeSpecName: "kube-api-access-hvqfd") pod "f8765c14-bd09-49a8-949c-15f9b8d58ff7" (UID: "f8765c14-bd09-49a8-949c-15f9b8d58ff7"). InnerVolumeSpecName "kube-api-access-hvqfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.019982 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8765c14-bd09-49a8-949c-15f9b8d58ff7" (UID: "f8765c14-bd09-49a8-949c-15f9b8d58ff7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.090909 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.091275 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8765c14-bd09-49a8-949c-15f9b8d58ff7-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.091343 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvqfd\" (UniqueName: \"kubernetes.io/projected/f8765c14-bd09-49a8-949c-15f9b8d58ff7-kube-api-access-hvqfd\") on node \"crc\" DevicePath \"\"" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.507676 5050 generic.go:334] "Generic (PLEG): container finished" podID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerID="d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2" exitCode=0 Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.507862 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvk7s" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.508344 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvk7s" event={"ID":"f8765c14-bd09-49a8-949c-15f9b8d58ff7","Type":"ContainerDied","Data":"d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2"} Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.508423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvk7s" event={"ID":"f8765c14-bd09-49a8-949c-15f9b8d58ff7","Type":"ContainerDied","Data":"3490f9113a14241d74e696bab1c2efad43de9c4b8afcd5dc5903c57d128a2740"} Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.508452 5050 scope.go:117] "RemoveContainer" containerID="d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.535867 5050 scope.go:117] "RemoveContainer" containerID="b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.554906 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvk7s"] Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.563208 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvk7s"] Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.563995 5050 scope.go:117] "RemoveContainer" containerID="01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.595830 5050 scope.go:117] "RemoveContainer" containerID="d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2" Jan 31 06:00:38 crc kubenswrapper[5050]: E0131 06:00:38.596198 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2\": container with ID starting with d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2 not found: ID does not exist" containerID="d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.596260 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2"} err="failed to get container status \"d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2\": rpc error: code = NotFound desc = could not find container \"d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2\": container with ID starting with d7f55d0083f234ca3eeb9577a8e3d5773eee18ede904494002cdc479008651c2 not found: ID does not exist" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.596290 5050 scope.go:117] "RemoveContainer" containerID="b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db" Jan 31 06:00:38 crc kubenswrapper[5050]: E0131 06:00:38.596512 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db\": container with ID starting with b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db not found: ID does not exist" containerID="b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.596539 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db"} err="failed to get container status \"b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db\": rpc error: code = NotFound desc = could not find container \"b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db\": container with ID 
starting with b3b1ad73dd950a875781435979ddc9f59c36d871c8ffa5316d0f7c236ebf77db not found: ID does not exist" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.596554 5050 scope.go:117] "RemoveContainer" containerID="01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2" Jan 31 06:00:38 crc kubenswrapper[5050]: E0131 06:00:38.596829 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2\": container with ID starting with 01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2 not found: ID does not exist" containerID="01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2" Jan 31 06:00:38 crc kubenswrapper[5050]: I0131 06:00:38.596859 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2"} err="failed to get container status \"01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2\": rpc error: code = NotFound desc = could not find container \"01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2\": container with ID starting with 01d2bdb21721e9e60a7431243fb0b54d1198b000a53a34c7cccd46f6e4070cd2 not found: ID does not exist" Jan 31 06:00:39 crc kubenswrapper[5050]: I0131 06:00:39.736848 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:00:39 crc kubenswrapper[5050]: E0131 06:00:39.737731 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" 
podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:00:39 crc kubenswrapper[5050]: I0131 06:00:39.750908 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" path="/var/lib/kubelet/pods/f8765c14-bd09-49a8-949c-15f9b8d58ff7/volumes" Jan 31 06:00:54 crc kubenswrapper[5050]: I0131 06:00:54.736459 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:00:54 crc kubenswrapper[5050]: E0131 06:00:54.737388 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.161595 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497321-ntj5n"] Jan 31 06:01:00 crc kubenswrapper[5050]: E0131 06:01:00.162513 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="registry-server" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162529 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="registry-server" Jan 31 06:01:00 crc kubenswrapper[5050]: E0131 06:01:00.162550 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="extract-utilities" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162559 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="extract-utilities" Jan 31 06:01:00 crc kubenswrapper[5050]: E0131 06:01:00.162577 5050 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="registry-server" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162586 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="registry-server" Jan 31 06:01:00 crc kubenswrapper[5050]: E0131 06:01:00.162599 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="extract-content" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162607 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="extract-content" Jan 31 06:01:00 crc kubenswrapper[5050]: E0131 06:01:00.162635 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="extract-utilities" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162643 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="extract-utilities" Jan 31 06:01:00 crc kubenswrapper[5050]: E0131 06:01:00.162657 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="extract-content" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162665 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="extract-content" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162887 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="edea326f-1f38-4448-b3a5-681840ea402f" containerName="registry-server" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.162921 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8765c14-bd09-49a8-949c-15f9b8d58ff7" containerName="registry-server" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.163600 5050 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.177473 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497321-ntj5n"] Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.213325 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-fernet-keys\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.213403 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-combined-ca-bundle\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.213484 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-config-data\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.213540 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmm5k\" (UniqueName: \"kubernetes.io/projected/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-kube-api-access-vmm5k\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.315352 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-combined-ca-bundle\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.315458 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-config-data\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.315507 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmm5k\" (UniqueName: \"kubernetes.io/projected/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-kube-api-access-vmm5k\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.315559 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-fernet-keys\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.321290 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-combined-ca-bundle\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.327802 5050 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-config-data\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.329232 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-fernet-keys\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.332658 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmm5k\" (UniqueName: \"kubernetes.io/projected/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-kube-api-access-vmm5k\") pod \"keystone-cron-29497321-ntj5n\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.485129 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:00 crc kubenswrapper[5050]: I0131 06:01:00.987422 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497321-ntj5n"] Jan 31 06:01:01 crc kubenswrapper[5050]: I0131 06:01:01.725911 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497321-ntj5n" event={"ID":"ab1681a7-2cdf-4cfe-a909-91b36ff079aa","Type":"ContainerStarted","Data":"7b14b2804dcd8254bcce9374133effcc7242b59952302fa9e065d00dd439ac85"} Jan 31 06:01:01 crc kubenswrapper[5050]: I0131 06:01:01.726270 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497321-ntj5n" event={"ID":"ab1681a7-2cdf-4cfe-a909-91b36ff079aa","Type":"ContainerStarted","Data":"8a5d1d59f1e841dfe043d81687f0c7c169fe727f280f8b44a4e2c631a4636aac"} Jan 31 06:01:01 crc kubenswrapper[5050]: I0131 06:01:01.749688 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29497321-ntj5n" podStartSLOduration=1.749668499 podStartE2EDuration="1.749668499s" podCreationTimestamp="2026-01-31 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:01:01.742341923 +0000 UTC m=+2386.791503519" watchObservedRunningTime="2026-01-31 06:01:01.749668499 +0000 UTC m=+2386.798830095" Jan 31 06:01:03 crc kubenswrapper[5050]: I0131 06:01:03.748971 5050 generic.go:334] "Generic (PLEG): container finished" podID="ab1681a7-2cdf-4cfe-a909-91b36ff079aa" containerID="7b14b2804dcd8254bcce9374133effcc7242b59952302fa9e065d00dd439ac85" exitCode=0 Jan 31 06:01:03 crc kubenswrapper[5050]: I0131 06:01:03.749410 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497321-ntj5n" 
event={"ID":"ab1681a7-2cdf-4cfe-a909-91b36ff079aa","Type":"ContainerDied","Data":"7b14b2804dcd8254bcce9374133effcc7242b59952302fa9e065d00dd439ac85"} Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.134900 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.210898 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-config-data\") pod \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.211126 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-combined-ca-bundle\") pod \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.212325 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmm5k\" (UniqueName: \"kubernetes.io/projected/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-kube-api-access-vmm5k\") pod \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.212401 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-fernet-keys\") pod \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\" (UID: \"ab1681a7-2cdf-4cfe-a909-91b36ff079aa\") " Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.217995 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-kube-api-access-vmm5k" 
(OuterVolumeSpecName: "kube-api-access-vmm5k") pod "ab1681a7-2cdf-4cfe-a909-91b36ff079aa" (UID: "ab1681a7-2cdf-4cfe-a909-91b36ff079aa"). InnerVolumeSpecName "kube-api-access-vmm5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.231182 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ab1681a7-2cdf-4cfe-a909-91b36ff079aa" (UID: "ab1681a7-2cdf-4cfe-a909-91b36ff079aa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.241246 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab1681a7-2cdf-4cfe-a909-91b36ff079aa" (UID: "ab1681a7-2cdf-4cfe-a909-91b36ff079aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.264910 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-config-data" (OuterVolumeSpecName: "config-data") pod "ab1681a7-2cdf-4cfe-a909-91b36ff079aa" (UID: "ab1681a7-2cdf-4cfe-a909-91b36ff079aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.314913 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmm5k\" (UniqueName: \"kubernetes.io/projected/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-kube-api-access-vmm5k\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.314985 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.315005 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.315021 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1681a7-2cdf-4cfe-a909-91b36ff079aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.744129 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:01:05 crc kubenswrapper[5050]: E0131 06:01:05.744450 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.765661 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497321-ntj5n" 
event={"ID":"ab1681a7-2cdf-4cfe-a909-91b36ff079aa","Type":"ContainerDied","Data":"8a5d1d59f1e841dfe043d81687f0c7c169fe727f280f8b44a4e2c631a4636aac"} Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.765710 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a5d1d59f1e841dfe043d81687f0c7c169fe727f280f8b44a4e2c631a4636aac" Jan 31 06:01:05 crc kubenswrapper[5050]: I0131 06:01:05.765770 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497321-ntj5n" Jan 31 06:01:16 crc kubenswrapper[5050]: I0131 06:01:16.737216 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:01:16 crc kubenswrapper[5050]: E0131 06:01:16.738649 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:01:17 crc kubenswrapper[5050]: I0131 06:01:17.886092 5050 generic.go:334] "Generic (PLEG): container finished" podID="483749fc-4acc-4fdf-94b0-359fb3d7a82e" containerID="69a883b12b2cec660bd2500053b5423b01e8d06d17af4ed6734cc7a6ad2504a7" exitCode=0 Jan 31 06:01:17 crc kubenswrapper[5050]: I0131 06:01:17.886443 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" event={"ID":"483749fc-4acc-4fdf-94b0-359fb3d7a82e","Type":"ContainerDied","Data":"69a883b12b2cec660bd2500053b5423b01e8d06d17af4ed6734cc7a6ad2504a7"} Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.287863 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.391717 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjbqc\" (UniqueName: \"kubernetes.io/projected/483749fc-4acc-4fdf-94b0-359fb3d7a82e-kube-api-access-sjbqc\") pod \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.391789 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-inventory\") pod \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.391837 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ssh-key-openstack-edpm-ipam\") pod \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.391983 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ceph\") pod \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\" (UID: \"483749fc-4acc-4fdf-94b0-359fb3d7a82e\") " Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.397238 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483749fc-4acc-4fdf-94b0-359fb3d7a82e-kube-api-access-sjbqc" (OuterVolumeSpecName: "kube-api-access-sjbqc") pod "483749fc-4acc-4fdf-94b0-359fb3d7a82e" (UID: "483749fc-4acc-4fdf-94b0-359fb3d7a82e"). InnerVolumeSpecName "kube-api-access-sjbqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.398114 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ceph" (OuterVolumeSpecName: "ceph") pod "483749fc-4acc-4fdf-94b0-359fb3d7a82e" (UID: "483749fc-4acc-4fdf-94b0-359fb3d7a82e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.424089 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "483749fc-4acc-4fdf-94b0-359fb3d7a82e" (UID: "483749fc-4acc-4fdf-94b0-359fb3d7a82e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.430298 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-inventory" (OuterVolumeSpecName: "inventory") pod "483749fc-4acc-4fdf-94b0-359fb3d7a82e" (UID: "483749fc-4acc-4fdf-94b0-359fb3d7a82e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.493655 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjbqc\" (UniqueName: \"kubernetes.io/projected/483749fc-4acc-4fdf-94b0-359fb3d7a82e-kube-api-access-sjbqc\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.493695 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.493706 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.493715 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/483749fc-4acc-4fdf-94b0-359fb3d7a82e-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.904252 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" event={"ID":"483749fc-4acc-4fdf-94b0-359fb3d7a82e","Type":"ContainerDied","Data":"03dd8ef015b2ff942717687012eda478351a31a255ee10126630c55beb360d93"} Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.904530 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03dd8ef015b2ff942717687012eda478351a31a255ee10126630c55beb360d93" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.904323 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.988298 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ddd6l"] Jan 31 06:01:19 crc kubenswrapper[5050]: E0131 06:01:19.988658 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1681a7-2cdf-4cfe-a909-91b36ff079aa" containerName="keystone-cron" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.988674 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1681a7-2cdf-4cfe-a909-91b36ff079aa" containerName="keystone-cron" Jan 31 06:01:19 crc kubenswrapper[5050]: E0131 06:01:19.988692 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483749fc-4acc-4fdf-94b0-359fb3d7a82e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.988699 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="483749fc-4acc-4fdf-94b0-359fb3d7a82e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.988851 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1681a7-2cdf-4cfe-a909-91b36ff079aa" containerName="keystone-cron" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.988863 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="483749fc-4acc-4fdf-94b0-359fb3d7a82e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.989536 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.992672 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.992758 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.992872 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.993530 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:01:19 crc kubenswrapper[5050]: I0131 06:01:19.995162 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.008426 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ddd6l"] Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.110117 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rllch\" (UniqueName: \"kubernetes.io/projected/54089fa6-6fa3-4f57-a554-eb47674a935f-kube-api-access-rllch\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.110264 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 
06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.110437 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ceph\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.110529 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.212584 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rllch\" (UniqueName: \"kubernetes.io/projected/54089fa6-6fa3-4f57-a554-eb47674a935f-kube-api-access-rllch\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.212752 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.212839 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ceph\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.212915 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.217706 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.217752 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ceph\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.218387 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.247618 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rllch\" (UniqueName: \"kubernetes.io/projected/54089fa6-6fa3-4f57-a554-eb47674a935f-kube-api-access-rllch\") pod \"ssh-known-hosts-edpm-deployment-ddd6l\" (UID: 
\"54089fa6-6fa3-4f57-a554-eb47674a935f\") " pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.322934 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.805905 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ddd6l"] Jan 31 06:01:20 crc kubenswrapper[5050]: W0131 06:01:20.807297 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54089fa6_6fa3_4f57_a554_eb47674a935f.slice/crio-ab6b3d37fcc8ec105ceab31b94f32e2dbe7b53eb14809f839b574a42fcdeff76 WatchSource:0}: Error finding container ab6b3d37fcc8ec105ceab31b94f32e2dbe7b53eb14809f839b574a42fcdeff76: Status 404 returned error can't find the container with id ab6b3d37fcc8ec105ceab31b94f32e2dbe7b53eb14809f839b574a42fcdeff76 Jan 31 06:01:20 crc kubenswrapper[5050]: I0131 06:01:20.923355 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" event={"ID":"54089fa6-6fa3-4f57-a554-eb47674a935f","Type":"ContainerStarted","Data":"ab6b3d37fcc8ec105ceab31b94f32e2dbe7b53eb14809f839b574a42fcdeff76"} Jan 31 06:01:21 crc kubenswrapper[5050]: I0131 06:01:21.943268 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" event={"ID":"54089fa6-6fa3-4f57-a554-eb47674a935f","Type":"ContainerStarted","Data":"ca40b8cd0188376701a05b98f268a239b04563465cb4f69207e28596296d97e9"} Jan 31 06:01:21 crc kubenswrapper[5050]: I0131 06:01:21.960975 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" podStartSLOduration=2.491007855 podStartE2EDuration="2.960936866s" podCreationTimestamp="2026-01-31 06:01:19 +0000 UTC" firstStartedPulling="2026-01-31 
06:01:20.810157468 +0000 UTC m=+2405.859319074" lastFinishedPulling="2026-01-31 06:01:21.280086489 +0000 UTC m=+2406.329248085" observedRunningTime="2026-01-31 06:01:21.957785411 +0000 UTC m=+2407.006947027" watchObservedRunningTime="2026-01-31 06:01:21.960936866 +0000 UTC m=+2407.010098482" Jan 31 06:01:28 crc kubenswrapper[5050]: I0131 06:01:28.736214 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:01:28 crc kubenswrapper[5050]: E0131 06:01:28.737037 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:01:30 crc kubenswrapper[5050]: I0131 06:01:30.023138 5050 generic.go:334] "Generic (PLEG): container finished" podID="54089fa6-6fa3-4f57-a554-eb47674a935f" containerID="ca40b8cd0188376701a05b98f268a239b04563465cb4f69207e28596296d97e9" exitCode=0 Jan 31 06:01:30 crc kubenswrapper[5050]: I0131 06:01:30.023256 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" event={"ID":"54089fa6-6fa3-4f57-a554-eb47674a935f","Type":"ContainerDied","Data":"ca40b8cd0188376701a05b98f268a239b04563465cb4f69207e28596296d97e9"} Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.437760 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.529829 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ceph\") pod \"54089fa6-6fa3-4f57-a554-eb47674a935f\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.529891 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rllch\" (UniqueName: \"kubernetes.io/projected/54089fa6-6fa3-4f57-a554-eb47674a935f-kube-api-access-rllch\") pod \"54089fa6-6fa3-4f57-a554-eb47674a935f\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.529990 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ssh-key-openstack-edpm-ipam\") pod \"54089fa6-6fa3-4f57-a554-eb47674a935f\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.530108 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-inventory-0\") pod \"54089fa6-6fa3-4f57-a554-eb47674a935f\" (UID: \"54089fa6-6fa3-4f57-a554-eb47674a935f\") " Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.535579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ceph" (OuterVolumeSpecName: "ceph") pod "54089fa6-6fa3-4f57-a554-eb47674a935f" (UID: "54089fa6-6fa3-4f57-a554-eb47674a935f"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.535894 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54089fa6-6fa3-4f57-a554-eb47674a935f-kube-api-access-rllch" (OuterVolumeSpecName: "kube-api-access-rllch") pod "54089fa6-6fa3-4f57-a554-eb47674a935f" (UID: "54089fa6-6fa3-4f57-a554-eb47674a935f"). InnerVolumeSpecName "kube-api-access-rllch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.558811 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "54089fa6-6fa3-4f57-a554-eb47674a935f" (UID: "54089fa6-6fa3-4f57-a554-eb47674a935f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.558884 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "54089fa6-6fa3-4f57-a554-eb47674a935f" (UID: "54089fa6-6fa3-4f57-a554-eb47674a935f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.633270 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.633345 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rllch\" (UniqueName: \"kubernetes.io/projected/54089fa6-6fa3-4f57-a554-eb47674a935f-kube-api-access-rllch\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.633362 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:31 crc kubenswrapper[5050]: I0131 06:01:31.633374 5050 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54089fa6-6fa3-4f57-a554-eb47674a935f-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.063606 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" event={"ID":"54089fa6-6fa3-4f57-a554-eb47674a935f","Type":"ContainerDied","Data":"ab6b3d37fcc8ec105ceab31b94f32e2dbe7b53eb14809f839b574a42fcdeff76"} Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.063910 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab6b3d37fcc8ec105ceab31b94f32e2dbe7b53eb14809f839b574a42fcdeff76" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.063759 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ddd6l" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.129298 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7"] Jan 31 06:01:32 crc kubenswrapper[5050]: E0131 06:01:32.129746 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54089fa6-6fa3-4f57-a554-eb47674a935f" containerName="ssh-known-hosts-edpm-deployment" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.129774 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="54089fa6-6fa3-4f57-a554-eb47674a935f" containerName="ssh-known-hosts-edpm-deployment" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.130017 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="54089fa6-6fa3-4f57-a554-eb47674a935f" containerName="ssh-known-hosts-edpm-deployment" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.130795 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.134243 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.134348 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.134247 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.134253 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.134557 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.139040 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7"] Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.283245 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.283392 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.283560 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsl6\" (UniqueName: \"kubernetes.io/projected/6cbec4ed-d7d5-45f2-8919-96d339becbba-kube-api-access-5gsl6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.283601 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.385479 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gsl6\" (UniqueName: \"kubernetes.io/projected/6cbec4ed-d7d5-45f2-8919-96d339becbba-kube-api-access-5gsl6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.385616 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.385845 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.385943 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.391152 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.391834 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.397646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.404209 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gsl6\" (UniqueName: \"kubernetes.io/projected/6cbec4ed-d7d5-45f2-8919-96d339becbba-kube-api-access-5gsl6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n5rc7\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.492791 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.812982 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7"] Jan 31 06:01:32 crc kubenswrapper[5050]: I0131 06:01:32.819412 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:01:33 crc kubenswrapper[5050]: I0131 06:01:33.074838 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" event={"ID":"6cbec4ed-d7d5-45f2-8919-96d339becbba","Type":"ContainerStarted","Data":"98b6f801981d6a81f3ac5167f043fca1eb25d67cefeedbe724216035f71d9b92"} Jan 31 06:01:34 crc kubenswrapper[5050]: I0131 06:01:34.089532 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" event={"ID":"6cbec4ed-d7d5-45f2-8919-96d339becbba","Type":"ContainerStarted","Data":"911893cbe101d71a9558cece701df38d85e1b59a5a39d3de196534de62389a7c"} Jan 31 06:01:34 crc kubenswrapper[5050]: I0131 06:01:34.111159 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" podStartSLOduration=1.584293222 podStartE2EDuration="2.111135138s" podCreationTimestamp="2026-01-31 06:01:32 +0000 UTC" firstStartedPulling="2026-01-31 06:01:32.8191447 +0000 UTC m=+2417.868306296" 
lastFinishedPulling="2026-01-31 06:01:33.345986616 +0000 UTC m=+2418.395148212" observedRunningTime="2026-01-31 06:01:34.107335026 +0000 UTC m=+2419.156496642" watchObservedRunningTime="2026-01-31 06:01:34.111135138 +0000 UTC m=+2419.160296744" Jan 31 06:01:40 crc kubenswrapper[5050]: I0131 06:01:40.136667 5050 generic.go:334] "Generic (PLEG): container finished" podID="6cbec4ed-d7d5-45f2-8919-96d339becbba" containerID="911893cbe101d71a9558cece701df38d85e1b59a5a39d3de196534de62389a7c" exitCode=0 Jan 31 06:01:40 crc kubenswrapper[5050]: I0131 06:01:40.136772 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" event={"ID":"6cbec4ed-d7d5-45f2-8919-96d339becbba","Type":"ContainerDied","Data":"911893cbe101d71a9558cece701df38d85e1b59a5a39d3de196534de62389a7c"} Jan 31 06:01:40 crc kubenswrapper[5050]: I0131 06:01:40.737090 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:01:40 crc kubenswrapper[5050]: E0131 06:01:40.737785 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.553054 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.662769 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gsl6\" (UniqueName: \"kubernetes.io/projected/6cbec4ed-d7d5-45f2-8919-96d339becbba-kube-api-access-5gsl6\") pod \"6cbec4ed-d7d5-45f2-8919-96d339becbba\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.663162 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-inventory\") pod \"6cbec4ed-d7d5-45f2-8919-96d339becbba\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.663204 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ssh-key-openstack-edpm-ipam\") pod \"6cbec4ed-d7d5-45f2-8919-96d339becbba\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.663290 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ceph\") pod \"6cbec4ed-d7d5-45f2-8919-96d339becbba\" (UID: \"6cbec4ed-d7d5-45f2-8919-96d339becbba\") " Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.669120 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ceph" (OuterVolumeSpecName: "ceph") pod "6cbec4ed-d7d5-45f2-8919-96d339becbba" (UID: "6cbec4ed-d7d5-45f2-8919-96d339becbba"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.672207 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbec4ed-d7d5-45f2-8919-96d339becbba-kube-api-access-5gsl6" (OuterVolumeSpecName: "kube-api-access-5gsl6") pod "6cbec4ed-d7d5-45f2-8919-96d339becbba" (UID: "6cbec4ed-d7d5-45f2-8919-96d339becbba"). InnerVolumeSpecName "kube-api-access-5gsl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.690208 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-inventory" (OuterVolumeSpecName: "inventory") pod "6cbec4ed-d7d5-45f2-8919-96d339becbba" (UID: "6cbec4ed-d7d5-45f2-8919-96d339becbba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.691298 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6cbec4ed-d7d5-45f2-8919-96d339becbba" (UID: "6cbec4ed-d7d5-45f2-8919-96d339becbba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.765701 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.765737 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.765750 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cbec4ed-d7d5-45f2-8919-96d339becbba-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:41 crc kubenswrapper[5050]: I0131 06:01:41.765763 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gsl6\" (UniqueName: \"kubernetes.io/projected/6cbec4ed-d7d5-45f2-8919-96d339becbba-kube-api-access-5gsl6\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.162399 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" event={"ID":"6cbec4ed-d7d5-45f2-8919-96d339becbba","Type":"ContainerDied","Data":"98b6f801981d6a81f3ac5167f043fca1eb25d67cefeedbe724216035f71d9b92"} Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.162431 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b6f801981d6a81f3ac5167f043fca1eb25d67cefeedbe724216035f71d9b92" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.162434 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n5rc7" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.234145 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc"] Jan 31 06:01:42 crc kubenswrapper[5050]: E0131 06:01:42.234926 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbec4ed-d7d5-45f2-8919-96d339becbba" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.235044 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbec4ed-d7d5-45f2-8919-96d339becbba" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.235285 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbec4ed-d7d5-45f2-8919-96d339becbba" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.235881 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.241134 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.241663 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.241932 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.242218 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.241921 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc"] Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.242453 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.399943 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-969b9\" (UniqueName: \"kubernetes.io/projected/908ba466-3385-45bb-8c51-22e8142da678-kube-api-access-969b9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.400275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.400425 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.400763 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.501919 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.502012 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969b9\" (UniqueName: \"kubernetes.io/projected/908ba466-3385-45bb-8c51-22e8142da678-kube-api-access-969b9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.502062 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.502088 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.506402 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.508018 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.508484 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.521188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969b9\" (UniqueName: \"kubernetes.io/projected/908ba466-3385-45bb-8c51-22e8142da678-kube-api-access-969b9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:42 crc kubenswrapper[5050]: I0131 06:01:42.558664 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:43 crc kubenswrapper[5050]: I0131 06:01:43.113420 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc"] Jan 31 06:01:43 crc kubenswrapper[5050]: I0131 06:01:43.173987 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" event={"ID":"908ba466-3385-45bb-8c51-22e8142da678","Type":"ContainerStarted","Data":"7e2a48d70d54435f4d86c4cb9f572dc1c55262b72ba1ed290ecc1ced883f60f0"} Jan 31 06:01:44 crc kubenswrapper[5050]: I0131 06:01:44.182689 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" event={"ID":"908ba466-3385-45bb-8c51-22e8142da678","Type":"ContainerStarted","Data":"5574a1fad03e6a8588c87339a172c7f5b49d439f2e5179956614acc15fd61df8"} Jan 31 06:01:44 crc kubenswrapper[5050]: I0131 06:01:44.203715 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" podStartSLOduration=1.753294616 podStartE2EDuration="2.203694587s" podCreationTimestamp="2026-01-31 06:01:42 +0000 UTC" firstStartedPulling="2026-01-31 06:01:43.103009653 +0000 UTC m=+2428.152171269" 
lastFinishedPulling="2026-01-31 06:01:43.553409644 +0000 UTC m=+2428.602571240" observedRunningTime="2026-01-31 06:01:44.20115342 +0000 UTC m=+2429.250315016" watchObservedRunningTime="2026-01-31 06:01:44.203694587 +0000 UTC m=+2429.252856183" Jan 31 06:01:53 crc kubenswrapper[5050]: I0131 06:01:53.256832 5050 generic.go:334] "Generic (PLEG): container finished" podID="908ba466-3385-45bb-8c51-22e8142da678" containerID="5574a1fad03e6a8588c87339a172c7f5b49d439f2e5179956614acc15fd61df8" exitCode=0 Jan 31 06:01:53 crc kubenswrapper[5050]: I0131 06:01:53.256912 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" event={"ID":"908ba466-3385-45bb-8c51-22e8142da678","Type":"ContainerDied","Data":"5574a1fad03e6a8588c87339a172c7f5b49d439f2e5179956614acc15fd61df8"} Jan 31 06:01:53 crc kubenswrapper[5050]: I0131 06:01:53.737717 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:01:53 crc kubenswrapper[5050]: E0131 06:01:53.738175 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.675104 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.829130 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969b9\" (UniqueName: \"kubernetes.io/projected/908ba466-3385-45bb-8c51-22e8142da678-kube-api-access-969b9\") pod \"908ba466-3385-45bb-8c51-22e8142da678\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.829208 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ssh-key-openstack-edpm-ipam\") pod \"908ba466-3385-45bb-8c51-22e8142da678\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.829251 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ceph\") pod \"908ba466-3385-45bb-8c51-22e8142da678\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.829356 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-inventory\") pod \"908ba466-3385-45bb-8c51-22e8142da678\" (UID: \"908ba466-3385-45bb-8c51-22e8142da678\") " Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.838235 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ceph" (OuterVolumeSpecName: "ceph") pod "908ba466-3385-45bb-8c51-22e8142da678" (UID: "908ba466-3385-45bb-8c51-22e8142da678"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.851234 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908ba466-3385-45bb-8c51-22e8142da678-kube-api-access-969b9" (OuterVolumeSpecName: "kube-api-access-969b9") pod "908ba466-3385-45bb-8c51-22e8142da678" (UID: "908ba466-3385-45bb-8c51-22e8142da678"). InnerVolumeSpecName "kube-api-access-969b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.874349 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "908ba466-3385-45bb-8c51-22e8142da678" (UID: "908ba466-3385-45bb-8c51-22e8142da678"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.887182 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-inventory" (OuterVolumeSpecName: "inventory") pod "908ba466-3385-45bb-8c51-22e8142da678" (UID: "908ba466-3385-45bb-8c51-22e8142da678"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.931248 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969b9\" (UniqueName: \"kubernetes.io/projected/908ba466-3385-45bb-8c51-22e8142da678-kube-api-access-969b9\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.931279 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.931288 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:54 crc kubenswrapper[5050]: I0131 06:01:54.931297 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908ba466-3385-45bb-8c51-22e8142da678-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.292514 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" event={"ID":"908ba466-3385-45bb-8c51-22e8142da678","Type":"ContainerDied","Data":"7e2a48d70d54435f4d86c4cb9f572dc1c55262b72ba1ed290ecc1ced883f60f0"} Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.293052 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2a48d70d54435f4d86c4cb9f572dc1c55262b72ba1ed290ecc1ced883f60f0" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.292922 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.384671 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx"] Jan 31 06:01:55 crc kubenswrapper[5050]: E0131 06:01:55.386357 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908ba466-3385-45bb-8c51-22e8142da678" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.386396 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="908ba466-3385-45bb-8c51-22e8142da678" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.386904 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="908ba466-3385-45bb-8c51-22e8142da678" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.388016 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.390992 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.392254 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.392510 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.392756 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.393015 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.393203 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.397190 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.397194 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.401658 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx"] Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.543575 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8m25\" (UniqueName: 
\"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-kube-api-access-w8m25\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.543675 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.543726 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.543792 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.543834 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.543868 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.543970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.544028 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.544065 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.544104 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.544142 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.544188 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.544233 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646211 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646288 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646327 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646363 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646389 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646423 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646455 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646563 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8m25\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-kube-api-access-w8m25\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646615 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646652 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646691 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646717 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.646739 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.652686 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.659235 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.659417 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.660155 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.660190 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.660404 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.661012 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.662007 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.662240 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.664050 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.683529 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.687902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8m25\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-kube-api-access-w8m25\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.688297 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-th4qx\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:55 crc kubenswrapper[5050]: I0131 06:01:55.711854 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:01:56 crc kubenswrapper[5050]: I0131 06:01:56.049887 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx"] Jan 31 06:01:56 crc kubenswrapper[5050]: I0131 06:01:56.301344 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" event={"ID":"af62f2ea-1f56-4d1f-91ce-06b83ca439e6","Type":"ContainerStarted","Data":"5d3f553745218c750540aa3260e5249b90a23cd6c45fd94e1085b88a5ba3051b"} Jan 31 06:01:57 crc kubenswrapper[5050]: I0131 06:01:57.318245 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" event={"ID":"af62f2ea-1f56-4d1f-91ce-06b83ca439e6","Type":"ContainerStarted","Data":"e2e1083b3813388cb4c7b1984ea0857dafc2492f49c592d0cce5c4f43a9a4186"} Jan 31 06:01:57 crc kubenswrapper[5050]: I0131 06:01:57.341926 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" podStartSLOduration=1.514321261 podStartE2EDuration="2.341906155s" podCreationTimestamp="2026-01-31 06:01:55 +0000 UTC" 
firstStartedPulling="2026-01-31 06:01:56.049617359 +0000 UTC m=+2441.098778955" lastFinishedPulling="2026-01-31 06:01:56.877202253 +0000 UTC m=+2441.926363849" observedRunningTime="2026-01-31 06:01:57.338430383 +0000 UTC m=+2442.387591969" watchObservedRunningTime="2026-01-31 06:01:57.341906155 +0000 UTC m=+2442.391067751" Jan 31 06:02:08 crc kubenswrapper[5050]: I0131 06:02:08.736756 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:02:08 crc kubenswrapper[5050]: E0131 06:02:08.737617 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:02:22 crc kubenswrapper[5050]: I0131 06:02:22.737097 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:02:22 crc kubenswrapper[5050]: E0131 06:02:22.737865 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:02:27 crc kubenswrapper[5050]: I0131 06:02:27.568701 5050 generic.go:334] "Generic (PLEG): container finished" podID="af62f2ea-1f56-4d1f-91ce-06b83ca439e6" containerID="e2e1083b3813388cb4c7b1984ea0857dafc2492f49c592d0cce5c4f43a9a4186" exitCode=0 Jan 31 06:02:27 crc kubenswrapper[5050]: I0131 06:02:27.568802 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" event={"ID":"af62f2ea-1f56-4d1f-91ce-06b83ca439e6","Type":"ContainerDied","Data":"e2e1083b3813388cb4c7b1984ea0857dafc2492f49c592d0cce5c4f43a9a4186"} Jan 31 06:02:28 crc kubenswrapper[5050]: I0131 06:02:28.971679 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.003798 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-repo-setup-combined-ca-bundle\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.003854 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ssh-key-openstack-edpm-ipam\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.003882 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ovn-combined-ca-bundle\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.003912 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ceph\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 
06:02:29.003967 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.004031 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-bootstrap-combined-ca-bundle\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.004062 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.004147 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.004176 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-neutron-metadata-combined-ca-bundle\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc 
kubenswrapper[5050]: I0131 06:02:29.004232 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-nova-combined-ca-bundle\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.004270 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-inventory\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.004325 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-libvirt-combined-ca-bundle\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.004435 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8m25\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-kube-api-access-w8m25\") pod \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\" (UID: \"af62f2ea-1f56-4d1f-91ce-06b83ca439e6\") " Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.011645 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.012223 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.013266 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.013568 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.014715 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.015582 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.016257 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.020592 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.021318 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ceph" (OuterVolumeSpecName: "ceph") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.023043 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.023502 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-kube-api-access-w8m25" (OuterVolumeSpecName: "kube-api-access-w8m25") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "kube-api-access-w8m25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.041083 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-inventory" (OuterVolumeSpecName: "inventory") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.041733 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af62f2ea-1f56-4d1f-91ce-06b83ca439e6" (UID: "af62f2ea-1f56-4d1f-91ce-06b83ca439e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106023 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8m25\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-kube-api-access-w8m25\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106063 5050 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106077 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106093 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106104 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106117 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106129 5050 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106140 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106152 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106166 5050 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106177 5050 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106190 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.106200 5050 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af62f2ea-1f56-4d1f-91ce-06b83ca439e6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.585456 5050 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" event={"ID":"af62f2ea-1f56-4d1f-91ce-06b83ca439e6","Type":"ContainerDied","Data":"5d3f553745218c750540aa3260e5249b90a23cd6c45fd94e1085b88a5ba3051b"} Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.585507 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d3f553745218c750540aa3260e5249b90a23cd6c45fd94e1085b88a5ba3051b" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.585588 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-th4qx" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.691449 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4"] Jan 31 06:02:29 crc kubenswrapper[5050]: E0131 06:02:29.692226 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af62f2ea-1f56-4d1f-91ce-06b83ca439e6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.692257 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="af62f2ea-1f56-4d1f-91ce-06b83ca439e6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.692416 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="af62f2ea-1f56-4d1f-91ce-06b83ca439e6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.693082 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.696281 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.696304 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.696349 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.698061 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.703574 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4"] Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.706230 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.719603 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.719661 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.719714 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rln\" (UniqueName: \"kubernetes.io/projected/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-kube-api-access-p4rln\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.719777 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.822382 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.822735 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.822784 5050 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-p4rln\" (UniqueName: \"kubernetes.io/projected/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-kube-api-access-p4rln\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.822860 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.827559 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.827573 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.827573 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:29 crc kubenswrapper[5050]: I0131 06:02:29.838228 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rln\" (UniqueName: \"kubernetes.io/projected/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-kube-api-access-p4rln\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:30 crc kubenswrapper[5050]: I0131 06:02:30.012104 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:30 crc kubenswrapper[5050]: I0131 06:02:30.587625 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4"] Jan 31 06:02:30 crc kubenswrapper[5050]: I0131 06:02:30.606034 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" event={"ID":"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3","Type":"ContainerStarted","Data":"5029cab6372d53ff9e642d926e3e275e2a038537454cef9b1c5179ad6bee455d"} Jan 31 06:02:32 crc kubenswrapper[5050]: I0131 06:02:32.620077 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" event={"ID":"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3","Type":"ContainerStarted","Data":"71991299d2c2eb1b355f63de926b8353ef54d541012b719e2c2ff48ab6b0967e"} Jan 31 06:02:32 crc kubenswrapper[5050]: I0131 06:02:32.641721 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" podStartSLOduration=2.1846392 podStartE2EDuration="3.641702953s" podCreationTimestamp="2026-01-31 06:02:29 +0000 UTC" firstStartedPulling="2026-01-31 06:02:30.592178517 +0000 UTC m=+2475.641340123" 
lastFinishedPulling="2026-01-31 06:02:32.04924228 +0000 UTC m=+2477.098403876" observedRunningTime="2026-01-31 06:02:32.634474401 +0000 UTC m=+2477.683635997" watchObservedRunningTime="2026-01-31 06:02:32.641702953 +0000 UTC m=+2477.690864539" Jan 31 06:02:35 crc kubenswrapper[5050]: I0131 06:02:35.745164 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:02:35 crc kubenswrapper[5050]: E0131 06:02:35.747971 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:02:37 crc kubenswrapper[5050]: I0131 06:02:37.693268 5050 generic.go:334] "Generic (PLEG): container finished" podID="7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" containerID="71991299d2c2eb1b355f63de926b8353ef54d541012b719e2c2ff48ab6b0967e" exitCode=0 Jan 31 06:02:37 crc kubenswrapper[5050]: I0131 06:02:37.693350 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" event={"ID":"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3","Type":"ContainerDied","Data":"71991299d2c2eb1b355f63de926b8353ef54d541012b719e2c2ff48ab6b0967e"} Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.218153 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.302759 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ssh-key-openstack-edpm-ipam\") pod \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.302866 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rln\" (UniqueName: \"kubernetes.io/projected/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-kube-api-access-p4rln\") pod \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.302913 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-inventory\") pod \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.302948 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ceph\") pod \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\" (UID: \"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3\") " Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.312174 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ceph" (OuterVolumeSpecName: "ceph") pod "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" (UID: "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.321749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-kube-api-access-p4rln" (OuterVolumeSpecName: "kube-api-access-p4rln") pod "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" (UID: "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3"). InnerVolumeSpecName "kube-api-access-p4rln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.335701 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" (UID: "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.339993 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-inventory" (OuterVolumeSpecName: "inventory") pod "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" (UID: "7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.406258 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.406311 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.406331 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.406352 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rln\" (UniqueName: \"kubernetes.io/projected/7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3-kube-api-access-p4rln\") on node \"crc\" DevicePath \"\"" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.715370 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" event={"ID":"7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3","Type":"ContainerDied","Data":"5029cab6372d53ff9e642d926e3e275e2a038537454cef9b1c5179ad6bee455d"} Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.715698 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5029cab6372d53ff9e642d926e3e275e2a038537454cef9b1c5179ad6bee455d" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.715465 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.806599 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v"] Jan 31 06:02:39 crc kubenswrapper[5050]: E0131 06:02:39.806990 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.807008 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.807181 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.807933 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.809708 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.810176 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.810321 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.810419 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.810557 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.810598 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.819131 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v"] Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.913831 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.913911 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.914252 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.914317 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.914556 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:39 crc kubenswrapper[5050]: I0131 06:02:39.914712 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65vn5\" (UniqueName: \"kubernetes.io/projected/6fcd0150-c73a-45de-ab72-f6e05ff00b42-kube-api-access-65vn5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.016428 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.016494 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.016606 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.016694 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65vn5\" (UniqueName: \"kubernetes.io/projected/6fcd0150-c73a-45de-ab72-f6e05ff00b42-kube-api-access-65vn5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.016743 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.016789 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.018615 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.020764 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.021103 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.021448 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.022705 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.038155 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65vn5\" (UniqueName: \"kubernetes.io/projected/6fcd0150-c73a-45de-ab72-f6e05ff00b42-kube-api-access-65vn5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6cs7v\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.124137 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.697144 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v"] Jan 31 06:02:40 crc kubenswrapper[5050]: I0131 06:02:40.738132 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" event={"ID":"6fcd0150-c73a-45de-ab72-f6e05ff00b42","Type":"ContainerStarted","Data":"da0e7821d096689c51f5accb32bdf7716cddbcb944d88d0713d6b942da6ecb20"} Jan 31 06:02:41 crc kubenswrapper[5050]: I0131 06:02:41.755557 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" event={"ID":"6fcd0150-c73a-45de-ab72-f6e05ff00b42","Type":"ContainerStarted","Data":"a2a145ac1afef45f11b0b475b28b4f8ea2062085d672f49edb4dfd8599a31fc8"} Jan 31 06:02:41 crc kubenswrapper[5050]: I0131 06:02:41.783406 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" podStartSLOduration=2.280002636 podStartE2EDuration="2.783380947s" podCreationTimestamp="2026-01-31 06:02:39 +0000 UTC" firstStartedPulling="2026-01-31 06:02:40.706648791 +0000 UTC m=+2485.755810387" lastFinishedPulling="2026-01-31 06:02:41.210027102 +0000 UTC m=+2486.259188698" observedRunningTime="2026-01-31 06:02:41.774265194 +0000 UTC m=+2486.823426800" watchObservedRunningTime="2026-01-31 06:02:41.783380947 +0000 UTC m=+2486.832542563" Jan 31 06:02:48 crc kubenswrapper[5050]: I0131 06:02:48.736687 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:02:48 crc kubenswrapper[5050]: E0131 06:02:48.737522 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:03:01 crc kubenswrapper[5050]: I0131 06:03:01.736717 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:03:01 crc kubenswrapper[5050]: E0131 06:03:01.737625 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:03:12 crc kubenswrapper[5050]: I0131 06:03:12.736909 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:03:12 crc kubenswrapper[5050]: E0131 06:03:12.737813 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:03:25 crc kubenswrapper[5050]: I0131 06:03:25.750068 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:03:25 crc kubenswrapper[5050]: E0131 06:03:25.751000 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.025800 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h72nc"] Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.028390 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.062249 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h72nc"] Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.203469 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wlm\" (UniqueName: \"kubernetes.io/projected/60ff9b39-0846-40da-b771-87cac92e390e-kube-api-access-69wlm\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.203603 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-catalog-content\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.203634 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-utilities\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " 
pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.305836 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-catalog-content\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.305968 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-utilities\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.306112 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69wlm\" (UniqueName: \"kubernetes.io/projected/60ff9b39-0846-40da-b771-87cac92e390e-kube-api-access-69wlm\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.306577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-utilities\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.306577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-catalog-content\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc 
kubenswrapper[5050]: I0131 06:03:27.328850 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wlm\" (UniqueName: \"kubernetes.io/projected/60ff9b39-0846-40da-b771-87cac92e390e-kube-api-access-69wlm\") pod \"redhat-operators-h72nc\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.361264 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:27 crc kubenswrapper[5050]: I0131 06:03:27.862583 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h72nc"] Jan 31 06:03:28 crc kubenswrapper[5050]: I0131 06:03:28.175355 5050 generic.go:334] "Generic (PLEG): container finished" podID="60ff9b39-0846-40da-b771-87cac92e390e" containerID="b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005" exitCode=0 Jan 31 06:03:28 crc kubenswrapper[5050]: I0131 06:03:28.175457 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h72nc" event={"ID":"60ff9b39-0846-40da-b771-87cac92e390e","Type":"ContainerDied","Data":"b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005"} Jan 31 06:03:28 crc kubenswrapper[5050]: I0131 06:03:28.176583 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h72nc" event={"ID":"60ff9b39-0846-40da-b771-87cac92e390e","Type":"ContainerStarted","Data":"ab6170d74c083e7fe41589eaa6fe1b1faf17e23fa62a7c448e129439cb88af55"} Jan 31 06:03:29 crc kubenswrapper[5050]: I0131 06:03:29.187889 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h72nc" event={"ID":"60ff9b39-0846-40da-b771-87cac92e390e","Type":"ContainerStarted","Data":"4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635"} Jan 31 06:03:30 crc kubenswrapper[5050]: I0131 
06:03:30.199571 5050 generic.go:334] "Generic (PLEG): container finished" podID="60ff9b39-0846-40da-b771-87cac92e390e" containerID="4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635" exitCode=0 Jan 31 06:03:30 crc kubenswrapper[5050]: I0131 06:03:30.199678 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h72nc" event={"ID":"60ff9b39-0846-40da-b771-87cac92e390e","Type":"ContainerDied","Data":"4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635"} Jan 31 06:03:32 crc kubenswrapper[5050]: I0131 06:03:32.234119 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h72nc" event={"ID":"60ff9b39-0846-40da-b771-87cac92e390e","Type":"ContainerStarted","Data":"150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29"} Jan 31 06:03:32 crc kubenswrapper[5050]: I0131 06:03:32.266174 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h72nc" podStartSLOduration=3.424824176 podStartE2EDuration="6.266145672s" podCreationTimestamp="2026-01-31 06:03:26 +0000 UTC" firstStartedPulling="2026-01-31 06:03:28.176751808 +0000 UTC m=+2533.225913404" lastFinishedPulling="2026-01-31 06:03:31.018073304 +0000 UTC m=+2536.067234900" observedRunningTime="2026-01-31 06:03:32.256873326 +0000 UTC m=+2537.306034952" watchObservedRunningTime="2026-01-31 06:03:32.266145672 +0000 UTC m=+2537.315307288" Jan 31 06:03:37 crc kubenswrapper[5050]: I0131 06:03:37.362723 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:37 crc kubenswrapper[5050]: I0131 06:03:37.363217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:38 crc kubenswrapper[5050]: I0131 06:03:38.414705 5050 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-h72nc" podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="registry-server" probeResult="failure" output=< Jan 31 06:03:38 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:03:38 crc kubenswrapper[5050]: > Jan 31 06:03:38 crc kubenswrapper[5050]: I0131 06:03:38.736743 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:03:38 crc kubenswrapper[5050]: E0131 06:03:38.737382 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:03:47 crc kubenswrapper[5050]: I0131 06:03:47.426767 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:47 crc kubenswrapper[5050]: I0131 06:03:47.474461 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:47 crc kubenswrapper[5050]: I0131 06:03:47.662833 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h72nc"] Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.407741 5050 generic.go:334] "Generic (PLEG): container finished" podID="6fcd0150-c73a-45de-ab72-f6e05ff00b42" containerID="a2a145ac1afef45f11b0b475b28b4f8ea2062085d672f49edb4dfd8599a31fc8" exitCode=0 Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.408430 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h72nc" 
podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="registry-server" containerID="cri-o://150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29" gracePeriod=2 Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.407799 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" event={"ID":"6fcd0150-c73a-45de-ab72-f6e05ff00b42","Type":"ContainerDied","Data":"a2a145ac1afef45f11b0b475b28b4f8ea2062085d672f49edb4dfd8599a31fc8"} Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.840982 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.945317 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-catalog-content\") pod \"60ff9b39-0846-40da-b771-87cac92e390e\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.945405 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-utilities\") pod \"60ff9b39-0846-40da-b771-87cac92e390e\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.945515 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69wlm\" (UniqueName: \"kubernetes.io/projected/60ff9b39-0846-40da-b771-87cac92e390e-kube-api-access-69wlm\") pod \"60ff9b39-0846-40da-b771-87cac92e390e\" (UID: \"60ff9b39-0846-40da-b771-87cac92e390e\") " Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.946754 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-utilities" 
(OuterVolumeSpecName: "utilities") pod "60ff9b39-0846-40da-b771-87cac92e390e" (UID: "60ff9b39-0846-40da-b771-87cac92e390e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:03:49 crc kubenswrapper[5050]: I0131 06:03:49.954191 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ff9b39-0846-40da-b771-87cac92e390e-kube-api-access-69wlm" (OuterVolumeSpecName: "kube-api-access-69wlm") pod "60ff9b39-0846-40da-b771-87cac92e390e" (UID: "60ff9b39-0846-40da-b771-87cac92e390e"). InnerVolumeSpecName "kube-api-access-69wlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.048249 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.048297 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69wlm\" (UniqueName: \"kubernetes.io/projected/60ff9b39-0846-40da-b771-87cac92e390e-kube-api-access-69wlm\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.060351 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60ff9b39-0846-40da-b771-87cac92e390e" (UID: "60ff9b39-0846-40da-b771-87cac92e390e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.149707 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ff9b39-0846-40da-b771-87cac92e390e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.417602 5050 generic.go:334] "Generic (PLEG): container finished" podID="60ff9b39-0846-40da-b771-87cac92e390e" containerID="150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29" exitCode=0 Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.417691 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h72nc" event={"ID":"60ff9b39-0846-40da-b771-87cac92e390e","Type":"ContainerDied","Data":"150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29"} Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.418503 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h72nc" event={"ID":"60ff9b39-0846-40da-b771-87cac92e390e","Type":"ContainerDied","Data":"ab6170d74c083e7fe41589eaa6fe1b1faf17e23fa62a7c448e129439cb88af55"} Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.418548 5050 scope.go:117] "RemoveContainer" containerID="150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.417744 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h72nc" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.459161 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h72nc"] Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.461463 5050 scope.go:117] "RemoveContainer" containerID="4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.467008 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h72nc"] Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.493232 5050 scope.go:117] "RemoveContainer" containerID="b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.538716 5050 scope.go:117] "RemoveContainer" containerID="150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29" Jan 31 06:03:50 crc kubenswrapper[5050]: E0131 06:03:50.539339 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29\": container with ID starting with 150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29 not found: ID does not exist" containerID="150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.539369 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29"} err="failed to get container status \"150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29\": rpc error: code = NotFound desc = could not find container \"150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29\": container with ID starting with 150d7118459e2eba315fad8633fdadb566188087dc3afaae376330c5642bea29 not found: ID does 
not exist" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.539389 5050 scope.go:117] "RemoveContainer" containerID="4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635" Jan 31 06:03:50 crc kubenswrapper[5050]: E0131 06:03:50.539653 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635\": container with ID starting with 4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635 not found: ID does not exist" containerID="4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.539699 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635"} err="failed to get container status \"4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635\": rpc error: code = NotFound desc = could not find container \"4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635\": container with ID starting with 4d21cb6210fe8ee9e7bd05711c3393eebdb2f1e7297a46cd5f95da37a7f1a635 not found: ID does not exist" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.539717 5050 scope.go:117] "RemoveContainer" containerID="b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005" Jan 31 06:03:50 crc kubenswrapper[5050]: E0131 06:03:50.539992 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005\": container with ID starting with b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005 not found: ID does not exist" containerID="b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.540016 5050 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005"} err="failed to get container status \"b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005\": rpc error: code = NotFound desc = could not find container \"b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005\": container with ID starting with b9cba0da2ade9707fe1ac8c582409c36ef75e2bd8b7ce7fb7dfc6628ff188005 not found: ID does not exist" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.822790 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.864126 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovn-combined-ca-bundle\") pod \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.864187 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ceph\") pod \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.864227 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ssh-key-openstack-edpm-ipam\") pod \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.864263 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovncontroller-config-0\") pod \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.864406 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-inventory\") pod \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.864480 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65vn5\" (UniqueName: \"kubernetes.io/projected/6fcd0150-c73a-45de-ab72-f6e05ff00b42-kube-api-access-65vn5\") pod \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\" (UID: \"6fcd0150-c73a-45de-ab72-f6e05ff00b42\") " Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.872079 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6fcd0150-c73a-45de-ab72-f6e05ff00b42" (UID: "6fcd0150-c73a-45de-ab72-f6e05ff00b42"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.872158 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ceph" (OuterVolumeSpecName: "ceph") pod "6fcd0150-c73a-45de-ab72-f6e05ff00b42" (UID: "6fcd0150-c73a-45de-ab72-f6e05ff00b42"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.876782 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcd0150-c73a-45de-ab72-f6e05ff00b42-kube-api-access-65vn5" (OuterVolumeSpecName: "kube-api-access-65vn5") pod "6fcd0150-c73a-45de-ab72-f6e05ff00b42" (UID: "6fcd0150-c73a-45de-ab72-f6e05ff00b42"). InnerVolumeSpecName "kube-api-access-65vn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.888574 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-inventory" (OuterVolumeSpecName: "inventory") pod "6fcd0150-c73a-45de-ab72-f6e05ff00b42" (UID: "6fcd0150-c73a-45de-ab72-f6e05ff00b42"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.888616 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "6fcd0150-c73a-45de-ab72-f6e05ff00b42" (UID: "6fcd0150-c73a-45de-ab72-f6e05ff00b42"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.889792 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fcd0150-c73a-45de-ab72-f6e05ff00b42" (UID: "6fcd0150-c73a-45de-ab72-f6e05ff00b42"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.965764 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65vn5\" (UniqueName: \"kubernetes.io/projected/6fcd0150-c73a-45de-ab72-f6e05ff00b42-kube-api-access-65vn5\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.965815 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.965838 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.965856 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.965874 5050 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fcd0150-c73a-45de-ab72-f6e05ff00b42-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:50 crc kubenswrapper[5050]: I0131 06:03:50.965890 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fcd0150-c73a-45de-ab72-f6e05ff00b42-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.430379 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" 
event={"ID":"6fcd0150-c73a-45de-ab72-f6e05ff00b42","Type":"ContainerDied","Data":"da0e7821d096689c51f5accb32bdf7716cddbcb944d88d0713d6b942da6ecb20"} Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.430698 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da0e7821d096689c51f5accb32bdf7716cddbcb944d88d0713d6b942da6ecb20" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.430394 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6cs7v" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.528450 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977"] Jan 31 06:03:51 crc kubenswrapper[5050]: E0131 06:03:51.528812 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcd0150-c73a-45de-ab72-f6e05ff00b42" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.528831 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcd0150-c73a-45de-ab72-f6e05ff00b42" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 06:03:51 crc kubenswrapper[5050]: E0131 06:03:51.528841 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="extract-content" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.528848 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="extract-content" Jan 31 06:03:51 crc kubenswrapper[5050]: E0131 06:03:51.528889 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="extract-utilities" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.528897 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ff9b39-0846-40da-b771-87cac92e390e" 
containerName="extract-utilities" Jan 31 06:03:51 crc kubenswrapper[5050]: E0131 06:03:51.528913 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="registry-server" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.528919 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="registry-server" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.529112 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ff9b39-0846-40da-b771-87cac92e390e" containerName="registry-server" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.529151 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcd0150-c73a-45de-ab72-f6e05ff00b42" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.529811 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.532073 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.532101 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.532332 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.532879 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.533374 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:03:51 crc 
kubenswrapper[5050]: I0131 06:03:51.533821 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.538502 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.542653 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977"] Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.677091 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.677138 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl26x\" (UniqueName: \"kubernetes.io/projected/e08dc69a-2a62-4fdd-878d-88468fec4ef0-kube-api-access-sl26x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.677166 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 
06:03:51.677298 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.677350 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.677388 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.677459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.746687 5050 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ff9b39-0846-40da-b771-87cac92e390e" path="/var/lib/kubelet/pods/60ff9b39-0846-40da-b771-87cac92e390e/volumes" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.779278 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.779361 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.779511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.779561 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: 
\"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.779587 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl26x\" (UniqueName: \"kubernetes.io/projected/e08dc69a-2a62-4fdd-878d-88468fec4ef0-kube-api-access-sl26x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.779612 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.780281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.786266 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.786767 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.787012 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.789119 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:51 crc kubenswrapper[5050]: I0131 06:03:51.789695 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:52 crc kubenswrapper[5050]: I0131 06:03:52.068619 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:52 crc kubenswrapper[5050]: I0131 06:03:52.082006 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl26x\" (UniqueName: \"kubernetes.io/projected/e08dc69a-2a62-4fdd-878d-88468fec4ef0-kube-api-access-sl26x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:52 crc kubenswrapper[5050]: I0131 06:03:52.194791 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:03:52 crc kubenswrapper[5050]: I0131 06:03:52.730614 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977"] Jan 31 06:03:52 crc kubenswrapper[5050]: I0131 06:03:52.737196 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:03:52 crc kubenswrapper[5050]: E0131 06:03:52.737498 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:03:53 crc kubenswrapper[5050]: I0131 06:03:53.447681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" event={"ID":"e08dc69a-2a62-4fdd-878d-88468fec4ef0","Type":"ContainerStarted","Data":"fe91169fb13496b8c8a2cefe871e9e3d557a922cf328b3bac8de390ea1f993a6"} Jan 31 06:03:55 crc kubenswrapper[5050]: I0131 06:03:55.463597 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" event={"ID":"e08dc69a-2a62-4fdd-878d-88468fec4ef0","Type":"ContainerStarted","Data":"af9108c781a30d9be3931074cb46971bae35a2fa1a8dec5a217fbbd70418ed2a"} Jan 31 06:03:55 crc kubenswrapper[5050]: I0131 06:03:55.487500 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" podStartSLOduration=2.846381537 podStartE2EDuration="4.487480929s" podCreationTimestamp="2026-01-31 06:03:51 +0000 UTC" firstStartedPulling="2026-01-31 06:03:52.767133943 +0000 UTC m=+2557.816295539" lastFinishedPulling="2026-01-31 06:03:54.408233325 +0000 UTC m=+2559.457394931" observedRunningTime="2026-01-31 06:03:55.479821135 +0000 UTC m=+2560.528982731" watchObservedRunningTime="2026-01-31 06:03:55.487480929 +0000 UTC m=+2560.536642525" Jan 31 06:04:04 crc kubenswrapper[5050]: I0131 06:04:04.737507 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:04:04 crc kubenswrapper[5050]: E0131 06:04:04.738401 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:04:15 crc kubenswrapper[5050]: I0131 06:04:15.743914 5050 scope.go:117] "RemoveContainer" 
containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:04:16 crc kubenswrapper[5050]: I0131 06:04:16.631425 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"12f46c93a11a7b4f1ba5bc425839337056f564468bfed7e5d291893705d36b8b"} Jan 31 06:04:51 crc kubenswrapper[5050]: I0131 06:04:51.934883 5050 generic.go:334] "Generic (PLEG): container finished" podID="e08dc69a-2a62-4fdd-878d-88468fec4ef0" containerID="af9108c781a30d9be3931074cb46971bae35a2fa1a8dec5a217fbbd70418ed2a" exitCode=0 Jan 31 06:04:51 crc kubenswrapper[5050]: I0131 06:04:51.934981 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" event={"ID":"e08dc69a-2a62-4fdd-878d-88468fec4ef0","Type":"ContainerDied","Data":"af9108c781a30d9be3931074cb46971bae35a2fa1a8dec5a217fbbd70418ed2a"} Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.341452 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.375453 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.375506 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ceph\") pod \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.375563 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-nova-metadata-neutron-config-0\") pod \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.375618 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ssh-key-openstack-edpm-ipam\") pod \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.375652 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-inventory\") pod \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 
06:04:53.375749 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-metadata-combined-ca-bundle\") pod \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.375768 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl26x\" (UniqueName: \"kubernetes.io/projected/e08dc69a-2a62-4fdd-878d-88468fec4ef0-kube-api-access-sl26x\") pod \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\" (UID: \"e08dc69a-2a62-4fdd-878d-88468fec4ef0\") " Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.386430 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ceph" (OuterVolumeSpecName: "ceph") pod "e08dc69a-2a62-4fdd-878d-88468fec4ef0" (UID: "e08dc69a-2a62-4fdd-878d-88468fec4ef0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.387990 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e08dc69a-2a62-4fdd-878d-88468fec4ef0" (UID: "e08dc69a-2a62-4fdd-878d-88468fec4ef0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.389079 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e08dc69a-2a62-4fdd-878d-88468fec4ef0-kube-api-access-sl26x" (OuterVolumeSpecName: "kube-api-access-sl26x") pod "e08dc69a-2a62-4fdd-878d-88468fec4ef0" (UID: "e08dc69a-2a62-4fdd-878d-88468fec4ef0"). 
InnerVolumeSpecName "kube-api-access-sl26x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.406478 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-inventory" (OuterVolumeSpecName: "inventory") pod "e08dc69a-2a62-4fdd-878d-88468fec4ef0" (UID: "e08dc69a-2a62-4fdd-878d-88468fec4ef0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.410462 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e08dc69a-2a62-4fdd-878d-88468fec4ef0" (UID: "e08dc69a-2a62-4fdd-878d-88468fec4ef0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.412457 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e08dc69a-2a62-4fdd-878d-88468fec4ef0" (UID: "e08dc69a-2a62-4fdd-878d-88468fec4ef0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.413681 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e08dc69a-2a62-4fdd-878d-88468fec4ef0" (UID: "e08dc69a-2a62-4fdd-878d-88468fec4ef0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.477232 5050 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.477273 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.477284 5050 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.477294 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.477305 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.477313 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl26x\" (UniqueName: \"kubernetes.io/projected/e08dc69a-2a62-4fdd-878d-88468fec4ef0-kube-api-access-sl26x\") on node \"crc\" DevicePath \"\"" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.477321 5050 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e08dc69a-2a62-4fdd-878d-88468fec4ef0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.956618 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" event={"ID":"e08dc69a-2a62-4fdd-878d-88468fec4ef0","Type":"ContainerDied","Data":"fe91169fb13496b8c8a2cefe871e9e3d557a922cf328b3bac8de390ea1f993a6"} Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.956996 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe91169fb13496b8c8a2cefe871e9e3d557a922cf328b3bac8de390ea1f993a6" Jan 31 06:04:53 crc kubenswrapper[5050]: I0131 06:04:53.956928 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.040445 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5"] Jan 31 06:04:54 crc kubenswrapper[5050]: E0131 06:04:54.040804 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08dc69a-2a62-4fdd-878d-88468fec4ef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.040826 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08dc69a-2a62-4fdd-878d-88468fec4ef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.041052 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08dc69a-2a62-4fdd-878d-88468fec4ef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.041757 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.044255 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.044575 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.044610 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.044814 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.044867 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.045512 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.059492 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5"] Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.087767 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9bmq\" (UniqueName: \"kubernetes.io/projected/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-kube-api-access-l9bmq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.087810 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.087832 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.087873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.087891 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.087917 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.189265 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9bmq\" (UniqueName: \"kubernetes.io/projected/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-kube-api-access-l9bmq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.189323 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.189347 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.189393 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.189415 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.189447 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.577670 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.578807 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9bmq\" (UniqueName: \"kubernetes.io/projected/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-kube-api-access-l9bmq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.581609 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.584907 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.586411 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.586803 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:54 crc kubenswrapper[5050]: I0131 06:04:54.867387 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:04:55 crc kubenswrapper[5050]: I0131 06:04:55.399896 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5"] Jan 31 06:04:55 crc kubenswrapper[5050]: I0131 06:04:55.976423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" event={"ID":"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58","Type":"ContainerStarted","Data":"d03bedd71e6a5dc20603ec9d74b9e603aec6fba7c0f3876886a653670cd88471"} Jan 31 06:04:58 crc kubenswrapper[5050]: I0131 06:04:57.999519 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" event={"ID":"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58","Type":"ContainerStarted","Data":"877726a9da0c7052e879f25b7c1919fd159005d5db0a8d8cc5dbb93c48642339"} Jan 31 06:04:58 crc kubenswrapper[5050]: I0131 06:04:58.020980 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" podStartSLOduration=2.40226042 podStartE2EDuration="4.020940995s" podCreationTimestamp="2026-01-31 06:04:54 +0000 UTC" firstStartedPulling="2026-01-31 06:04:55.409575311 +0000 UTC m=+2620.458736907" lastFinishedPulling="2026-01-31 06:04:57.028255876 +0000 UTC m=+2622.077417482" observedRunningTime="2026-01-31 06:04:58.016467476 +0000 UTC m=+2623.065629112" watchObservedRunningTime="2026-01-31 06:04:58.020940995 +0000 UTC m=+2623.070102591" Jan 31 06:06:39 crc kubenswrapper[5050]: I0131 06:06:39.018235 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:06:39 crc kubenswrapper[5050]: I0131 
06:06:39.019655 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:07:05 crc kubenswrapper[5050]: I0131 06:07:05.844609 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-shsn5"] Jan 31 06:07:05 crc kubenswrapper[5050]: I0131 06:07:05.846792 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:05 crc kubenswrapper[5050]: I0131 06:07:05.855402 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shsn5"] Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.024167 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-utilities\") pod \"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.024258 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-catalog-content\") pod \"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.024286 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwfk\" (UniqueName: \"kubernetes.io/projected/e2d05b82-0de8-441e-a4fd-785f26ccba2a-kube-api-access-8pwfk\") pod 
\"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.125989 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-catalog-content\") pod \"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.126039 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwfk\" (UniqueName: \"kubernetes.io/projected/e2d05b82-0de8-441e-a4fd-785f26ccba2a-kube-api-access-8pwfk\") pod \"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.126151 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-utilities\") pod \"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.126774 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-utilities\") pod \"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.126832 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-catalog-content\") pod \"certified-operators-shsn5\" (UID: 
\"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.145447 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwfk\" (UniqueName: \"kubernetes.io/projected/e2d05b82-0de8-441e-a4fd-785f26ccba2a-kube-api-access-8pwfk\") pod \"certified-operators-shsn5\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.227632 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.743179 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shsn5"] Jan 31 06:07:06 crc kubenswrapper[5050]: I0131 06:07:06.770317 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shsn5" event={"ID":"e2d05b82-0de8-441e-a4fd-785f26ccba2a","Type":"ContainerStarted","Data":"28b507a27351d97cd479e1a828188cf25a63b5b1ee269f58c407272d1b06f430"} Jan 31 06:07:07 crc kubenswrapper[5050]: I0131 06:07:07.778733 5050 generic.go:334] "Generic (PLEG): container finished" podID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerID="0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee" exitCode=0 Jan 31 06:07:07 crc kubenswrapper[5050]: I0131 06:07:07.778841 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shsn5" event={"ID":"e2d05b82-0de8-441e-a4fd-785f26ccba2a","Type":"ContainerDied","Data":"0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee"} Jan 31 06:07:07 crc kubenswrapper[5050]: I0131 06:07:07.781835 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:07:09 crc kubenswrapper[5050]: I0131 06:07:09.017897 5050 
patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:07:09 crc kubenswrapper[5050]: I0131 06:07:09.018346 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:07:10 crc kubenswrapper[5050]: E0131 06:07:10.176501 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d05b82_0de8_441e_a4fd_785f26ccba2a.slice/crio-conmon-41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d05b82_0de8_441e_a4fd_785f26ccba2a.slice/crio-41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f.scope\": RecentStats: unable to find data in memory cache]" Jan 31 06:07:10 crc kubenswrapper[5050]: I0131 06:07:10.818688 5050 generic.go:334] "Generic (PLEG): container finished" podID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerID="41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f" exitCode=0 Jan 31 06:07:10 crc kubenswrapper[5050]: I0131 06:07:10.818725 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shsn5" event={"ID":"e2d05b82-0de8-441e-a4fd-785f26ccba2a","Type":"ContainerDied","Data":"41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f"} Jan 31 06:07:11 crc kubenswrapper[5050]: I0131 06:07:11.828221 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shsn5" event={"ID":"e2d05b82-0de8-441e-a4fd-785f26ccba2a","Type":"ContainerStarted","Data":"9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0"} Jan 31 06:07:11 crc kubenswrapper[5050]: I0131 06:07:11.863618 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-shsn5" podStartSLOduration=3.23718536 podStartE2EDuration="6.863594987s" podCreationTimestamp="2026-01-31 06:07:05 +0000 UTC" firstStartedPulling="2026-01-31 06:07:07.78146007 +0000 UTC m=+2752.830621706" lastFinishedPulling="2026-01-31 06:07:11.407869737 +0000 UTC m=+2756.457031333" observedRunningTime="2026-01-31 06:07:11.854702196 +0000 UTC m=+2756.903863792" watchObservedRunningTime="2026-01-31 06:07:11.863594987 +0000 UTC m=+2756.912756583" Jan 31 06:07:16 crc kubenswrapper[5050]: I0131 06:07:16.228756 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:16 crc kubenswrapper[5050]: I0131 06:07:16.230127 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:16 crc kubenswrapper[5050]: I0131 06:07:16.322836 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:16 crc kubenswrapper[5050]: I0131 06:07:16.932382 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:16 crc kubenswrapper[5050]: I0131 06:07:16.990177 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shsn5"] Jan 31 06:07:18 crc kubenswrapper[5050]: I0131 06:07:18.887383 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-shsn5" 
podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="registry-server" containerID="cri-o://9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0" gracePeriod=2 Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.409177 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.485112 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwfk\" (UniqueName: \"kubernetes.io/projected/e2d05b82-0de8-441e-a4fd-785f26ccba2a-kube-api-access-8pwfk\") pod \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.485164 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-utilities\") pod \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.485326 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-catalog-content\") pod \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\" (UID: \"e2d05b82-0de8-441e-a4fd-785f26ccba2a\") " Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.486999 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-utilities" (OuterVolumeSpecName: "utilities") pod "e2d05b82-0de8-441e-a4fd-785f26ccba2a" (UID: "e2d05b82-0de8-441e-a4fd-785f26ccba2a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.491879 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d05b82-0de8-441e-a4fd-785f26ccba2a-kube-api-access-8pwfk" (OuterVolumeSpecName: "kube-api-access-8pwfk") pod "e2d05b82-0de8-441e-a4fd-785f26ccba2a" (UID: "e2d05b82-0de8-441e-a4fd-785f26ccba2a"). InnerVolumeSpecName "kube-api-access-8pwfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.572106 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2d05b82-0de8-441e-a4fd-785f26ccba2a" (UID: "e2d05b82-0de8-441e-a4fd-785f26ccba2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.586873 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.586910 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d05b82-0de8-441e-a4fd-785f26ccba2a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.586919 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwfk\" (UniqueName: \"kubernetes.io/projected/e2d05b82-0de8-441e-a4fd-785f26ccba2a-kube-api-access-8pwfk\") on node \"crc\" DevicePath \"\"" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.897533 5050 generic.go:334] "Generic (PLEG): container finished" podID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" 
containerID="9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0" exitCode=0 Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.897573 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shsn5" event={"ID":"e2d05b82-0de8-441e-a4fd-785f26ccba2a","Type":"ContainerDied","Data":"9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0"} Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.897894 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shsn5" event={"ID":"e2d05b82-0de8-441e-a4fd-785f26ccba2a","Type":"ContainerDied","Data":"28b507a27351d97cd479e1a828188cf25a63b5b1ee269f58c407272d1b06f430"} Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.897931 5050 scope.go:117] "RemoveContainer" containerID="9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.897588 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-shsn5" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.922968 5050 scope.go:117] "RemoveContainer" containerID="41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.923123 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shsn5"] Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.930736 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-shsn5"] Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.945256 5050 scope.go:117] "RemoveContainer" containerID="0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.990583 5050 scope.go:117] "RemoveContainer" containerID="9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0" Jan 31 06:07:19 crc kubenswrapper[5050]: E0131 06:07:19.991167 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0\": container with ID starting with 9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0 not found: ID does not exist" containerID="9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.991226 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0"} err="failed to get container status \"9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0\": rpc error: code = NotFound desc = could not find container \"9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0\": container with ID starting with 9f97945a089c0ded44d650d2f2eb5a9a0dc50596b159c310215f444d8b57a1b0 not 
found: ID does not exist" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.991248 5050 scope.go:117] "RemoveContainer" containerID="41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f" Jan 31 06:07:19 crc kubenswrapper[5050]: E0131 06:07:19.991946 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f\": container with ID starting with 41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f not found: ID does not exist" containerID="41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.992082 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f"} err="failed to get container status \"41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f\": rpc error: code = NotFound desc = could not find container \"41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f\": container with ID starting with 41ec9dcf47d86edd43e3d3a987dff6de1342d83851ceed1e96d720ba1d33678f not found: ID does not exist" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.992127 5050 scope.go:117] "RemoveContainer" containerID="0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee" Jan 31 06:07:19 crc kubenswrapper[5050]: E0131 06:07:19.992563 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee\": container with ID starting with 0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee not found: ID does not exist" containerID="0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee" Jan 31 06:07:19 crc kubenswrapper[5050]: I0131 06:07:19.992620 5050 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee"} err="failed to get container status \"0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee\": rpc error: code = NotFound desc = could not find container \"0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee\": container with ID starting with 0f1a3ab402aafe050423ca9d31df5f7f6e67f2109ba0f7373dfd757694f6b5ee not found: ID does not exist" Jan 31 06:07:21 crc kubenswrapper[5050]: I0131 06:07:21.749697 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" path="/var/lib/kubelet/pods/e2d05b82-0de8-441e-a4fd-785f26ccba2a/volumes" Jan 31 06:07:39 crc kubenswrapper[5050]: I0131 06:07:39.021500 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:07:39 crc kubenswrapper[5050]: I0131 06:07:39.022467 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:07:39 crc kubenswrapper[5050]: I0131 06:07:39.022557 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 06:07:39 crc kubenswrapper[5050]: I0131 06:07:39.023942 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"12f46c93a11a7b4f1ba5bc425839337056f564468bfed7e5d291893705d36b8b"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:07:39 crc kubenswrapper[5050]: I0131 06:07:39.024059 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://12f46c93a11a7b4f1ba5bc425839337056f564468bfed7e5d291893705d36b8b" gracePeriod=600 Jan 31 06:07:40 crc kubenswrapper[5050]: I0131 06:07:40.088315 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="12f46c93a11a7b4f1ba5bc425839337056f564468bfed7e5d291893705d36b8b" exitCode=0 Jan 31 06:07:40 crc kubenswrapper[5050]: I0131 06:07:40.088944 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"12f46c93a11a7b4f1ba5bc425839337056f564468bfed7e5d291893705d36b8b"} Jan 31 06:07:40 crc kubenswrapper[5050]: I0131 06:07:40.088998 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400"} Jan 31 06:07:40 crc kubenswrapper[5050]: I0131 06:07:40.089021 5050 scope.go:117] "RemoveContainer" containerID="2478bd4b8a750cbc35b7c0554b0c0856c34de4083d4d64f61758143fe611b239" Jan 31 06:09:39 crc kubenswrapper[5050]: I0131 06:09:39.018686 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:09:39 crc kubenswrapper[5050]: I0131 06:09:39.019514 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:09:46 crc kubenswrapper[5050]: I0131 06:09:46.211899 5050 generic.go:334] "Generic (PLEG): container finished" podID="8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" containerID="877726a9da0c7052e879f25b7c1919fd159005d5db0a8d8cc5dbb93c48642339" exitCode=0 Jan 31 06:09:46 crc kubenswrapper[5050]: I0131 06:09:46.212003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" event={"ID":"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58","Type":"ContainerDied","Data":"877726a9da0c7052e879f25b7c1919fd159005d5db0a8d8cc5dbb93c48642339"} Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.673700 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.838572 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9bmq\" (UniqueName: \"kubernetes.io/projected/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-kube-api-access-l9bmq\") pod \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.838888 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-combined-ca-bundle\") pod \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.838980 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ssh-key-openstack-edpm-ipam\") pod \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.839210 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-secret-0\") pod \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.839358 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ceph\") pod \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.839417 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-inventory\") pod \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\" (UID: \"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58\") " Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.847350 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-kube-api-access-l9bmq" (OuterVolumeSpecName: "kube-api-access-l9bmq") pod "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" (UID: "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58"). InnerVolumeSpecName "kube-api-access-l9bmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.848193 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ceph" (OuterVolumeSpecName: "ceph") pod "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" (UID: "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.848777 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" (UID: "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.868277 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-inventory" (OuterVolumeSpecName: "inventory") pod "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" (UID: "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.875200 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" (UID: "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.882292 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" (UID: "8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.942182 5050 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.942234 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.942253 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.942272 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9bmq\" (UniqueName: \"kubernetes.io/projected/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-kube-api-access-l9bmq\") on node 
\"crc\" DevicePath \"\"" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.942294 5050 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:09:47 crc kubenswrapper[5050]: I0131 06:09:47.942311 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.241187 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" event={"ID":"8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58","Type":"ContainerDied","Data":"d03bedd71e6a5dc20603ec9d74b9e603aec6fba7c0f3876886a653670cd88471"} Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.241691 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d03bedd71e6a5dc20603ec9d74b9e603aec6fba7c0f3876886a653670cd88471" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.241364 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.326527 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw"] Jan 31 06:09:48 crc kubenswrapper[5050]: E0131 06:09:48.326870 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.326887 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 06:09:48 crc kubenswrapper[5050]: E0131 06:09:48.326907 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="extract-utilities" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.326913 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="extract-utilities" Jan 31 06:09:48 crc kubenswrapper[5050]: E0131 06:09:48.326937 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="extract-content" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.326944 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="extract-content" Jan 31 06:09:48 crc kubenswrapper[5050]: E0131 06:09:48.327305 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="registry-server" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.327320 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="registry-server" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.328121 5050 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d05b82-0de8-441e-a4fd-785f26ccba2a" containerName="registry-server" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.328147 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.328683 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.341026 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.341070 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw"] Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.341364 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.341980 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.342009 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.342211 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.342298 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.344242 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 06:09:48 crc 
kubenswrapper[5050]: I0131 06:09:48.344254 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rkhpw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.344345 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.451898 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.451990 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452024 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452109 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452139 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452208 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782s8\" (UniqueName: \"kubernetes.io/projected/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-kube-api-access-782s8\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452261 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452286 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-inventory\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452337 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452361 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.452401 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554030 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: 
\"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554111 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554230 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554269 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554337 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc 
kubenswrapper[5050]: I0131 06:09:48.554434 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554474 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554522 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.554835 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.555145 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.555224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782s8\" (UniqueName: \"kubernetes.io/projected/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-kube-api-access-782s8\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.556051 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.556111 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.561351 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: 
\"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.561393 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.561403 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.563923 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.564122 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.564172 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.564016 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.564661 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.580894 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782s8\" (UniqueName: \"kubernetes.io/projected/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-kube-api-access-782s8\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:48 crc kubenswrapper[5050]: I0131 06:09:48.646374 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:09:49 crc kubenswrapper[5050]: I0131 06:09:49.229741 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw"] Jan 31 06:09:49 crc kubenswrapper[5050]: I0131 06:09:49.252201 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" event={"ID":"b5ede333-cbdc-4c95-ac45-0ea62a8876f0","Type":"ContainerStarted","Data":"b29f3cb13fe3cfd6eca70da55f6a587b7c54a8ec94ff2a42fc1e46ccd4c86867"} Jan 31 06:09:53 crc kubenswrapper[5050]: I0131 06:09:53.287074 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" event={"ID":"b5ede333-cbdc-4c95-ac45-0ea62a8876f0","Type":"ContainerStarted","Data":"40821508a184cbaea52e06b08f2610889737f2b60a283728aa945e2c5ffd1a5a"} Jan 31 06:09:53 crc kubenswrapper[5050]: I0131 06:09:53.346384 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" podStartSLOduration=3.265055512 podStartE2EDuration="5.34636255s" podCreationTimestamp="2026-01-31 06:09:48 +0000 UTC" firstStartedPulling="2026-01-31 06:09:49.237281541 +0000 UTC m=+2914.286443137" lastFinishedPulling="2026-01-31 06:09:51.318588559 +0000 UTC m=+2916.367750175" observedRunningTime="2026-01-31 06:09:53.337519581 +0000 UTC m=+2918.386681167" watchObservedRunningTime="2026-01-31 06:09:53.34636255 +0000 UTC m=+2918.395524156" Jan 31 06:10:09 crc kubenswrapper[5050]: I0131 06:10:09.018584 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 
06:10:09 crc kubenswrapper[5050]: I0131 06:10:09.019496 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.018816 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.019339 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.019393 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.020053 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.020097 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" 
podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" gracePeriod=600 Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.748934 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" exitCode=0 Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.748967 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400"} Jan 31 06:10:39 crc kubenswrapper[5050]: I0131 06:10:39.749007 5050 scope.go:117] "RemoveContainer" containerID="12f46c93a11a7b4f1ba5bc425839337056f564468bfed7e5d291893705d36b8b" Jan 31 06:10:39 crc kubenswrapper[5050]: E0131 06:10:39.877857 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:10:40 crc kubenswrapper[5050]: I0131 06:10:40.760252 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:10:40 crc kubenswrapper[5050]: E0131 06:10:40.760684 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:10:55 crc kubenswrapper[5050]: I0131 06:10:55.742261 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:10:55 crc kubenswrapper[5050]: E0131 06:10:55.743292 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.694301 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sgsq"] Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.700182 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.711009 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sgsq"] Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.810075 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-utilities\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.810387 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-catalog-content\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.810443 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tq5\" (UniqueName: \"kubernetes.io/projected/56e4b575-9925-4660-827a-92b634bd178d-kube-api-access-82tq5\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.912492 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-utilities\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.912691 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-catalog-content\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.912719 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82tq5\" (UniqueName: \"kubernetes.io/projected/56e4b575-9925-4660-827a-92b634bd178d-kube-api-access-82tq5\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.913296 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-catalog-content\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.914011 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-utilities\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:58 crc kubenswrapper[5050]: I0131 06:10:58.937814 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82tq5\" (UniqueName: \"kubernetes.io/projected/56e4b575-9925-4660-827a-92b634bd178d-kube-api-access-82tq5\") pod \"redhat-marketplace-4sgsq\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:59 crc kubenswrapper[5050]: I0131 06:10:59.033847 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:10:59 crc kubenswrapper[5050]: I0131 06:10:59.515800 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sgsq"] Jan 31 06:10:59 crc kubenswrapper[5050]: W0131 06:10:59.531471 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56e4b575_9925_4660_827a_92b634bd178d.slice/crio-535a12077c6f3f3c875fc1e33b235425caf3f38f5422dac06adf32d9ef15cd6d WatchSource:0}: Error finding container 535a12077c6f3f3c875fc1e33b235425caf3f38f5422dac06adf32d9ef15cd6d: Status 404 returned error can't find the container with id 535a12077c6f3f3c875fc1e33b235425caf3f38f5422dac06adf32d9ef15cd6d Jan 31 06:10:59 crc kubenswrapper[5050]: I0131 06:10:59.958548 5050 generic.go:334] "Generic (PLEG): container finished" podID="56e4b575-9925-4660-827a-92b634bd178d" containerID="d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca" exitCode=0 Jan 31 06:10:59 crc kubenswrapper[5050]: I0131 06:10:59.958604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sgsq" event={"ID":"56e4b575-9925-4660-827a-92b634bd178d","Type":"ContainerDied","Data":"d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca"} Jan 31 06:10:59 crc kubenswrapper[5050]: I0131 06:10:59.958648 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sgsq" event={"ID":"56e4b575-9925-4660-827a-92b634bd178d","Type":"ContainerStarted","Data":"535a12077c6f3f3c875fc1e33b235425caf3f38f5422dac06adf32d9ef15cd6d"} Jan 31 06:11:01 crc kubenswrapper[5050]: I0131 06:11:01.979067 5050 generic.go:334] "Generic (PLEG): container finished" podID="56e4b575-9925-4660-827a-92b634bd178d" containerID="ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6" exitCode=0 Jan 31 06:11:01 crc kubenswrapper[5050]: I0131 
06:11:01.979754 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sgsq" event={"ID":"56e4b575-9925-4660-827a-92b634bd178d","Type":"ContainerDied","Data":"ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6"} Jan 31 06:11:05 crc kubenswrapper[5050]: I0131 06:11:05.012080 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sgsq" event={"ID":"56e4b575-9925-4660-827a-92b634bd178d","Type":"ContainerStarted","Data":"cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e"} Jan 31 06:11:05 crc kubenswrapper[5050]: I0131 06:11:05.032189 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sgsq" podStartSLOduration=3.015616825 podStartE2EDuration="7.032173742s" podCreationTimestamp="2026-01-31 06:10:58 +0000 UTC" firstStartedPulling="2026-01-31 06:10:59.963637325 +0000 UTC m=+2985.012798921" lastFinishedPulling="2026-01-31 06:11:03.980194212 +0000 UTC m=+2989.029355838" observedRunningTime="2026-01-31 06:11:05.029282914 +0000 UTC m=+2990.078444510" watchObservedRunningTime="2026-01-31 06:11:05.032173742 +0000 UTC m=+2990.081335338" Jan 31 06:11:07 crc kubenswrapper[5050]: I0131 06:11:07.736889 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:11:07 crc kubenswrapper[5050]: E0131 06:11:07.738293 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:11:09 crc kubenswrapper[5050]: I0131 06:11:09.034941 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:11:09 crc kubenswrapper[5050]: I0131 06:11:09.035016 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:11:09 crc kubenswrapper[5050]: I0131 06:11:09.105797 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:11:09 crc kubenswrapper[5050]: I0131 06:11:09.167476 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:11:09 crc kubenswrapper[5050]: I0131 06:11:09.343031 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sgsq"] Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.062904 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sgsq" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="registry-server" containerID="cri-o://cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e" gracePeriod=2 Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.540978 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.664771 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-catalog-content\") pod \"56e4b575-9925-4660-827a-92b634bd178d\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.664858 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-utilities\") pod \"56e4b575-9925-4660-827a-92b634bd178d\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.665084 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82tq5\" (UniqueName: \"kubernetes.io/projected/56e4b575-9925-4660-827a-92b634bd178d-kube-api-access-82tq5\") pod \"56e4b575-9925-4660-827a-92b634bd178d\" (UID: \"56e4b575-9925-4660-827a-92b634bd178d\") " Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.665692 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-utilities" (OuterVolumeSpecName: "utilities") pod "56e4b575-9925-4660-827a-92b634bd178d" (UID: "56e4b575-9925-4660-827a-92b634bd178d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.672122 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e4b575-9925-4660-827a-92b634bd178d-kube-api-access-82tq5" (OuterVolumeSpecName: "kube-api-access-82tq5") pod "56e4b575-9925-4660-827a-92b634bd178d" (UID: "56e4b575-9925-4660-827a-92b634bd178d"). InnerVolumeSpecName "kube-api-access-82tq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.692371 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56e4b575-9925-4660-827a-92b634bd178d" (UID: "56e4b575-9925-4660-827a-92b634bd178d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.767187 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.767212 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e4b575-9925-4660-827a-92b634bd178d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:11:11 crc kubenswrapper[5050]: I0131 06:11:11.767222 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82tq5\" (UniqueName: \"kubernetes.io/projected/56e4b575-9925-4660-827a-92b634bd178d-kube-api-access-82tq5\") on node \"crc\" DevicePath \"\"" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.075833 5050 generic.go:334] "Generic (PLEG): container finished" podID="56e4b575-9925-4660-827a-92b634bd178d" containerID="cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e" exitCode=0 Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.075907 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sgsq" event={"ID":"56e4b575-9925-4660-827a-92b634bd178d","Type":"ContainerDied","Data":"cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e"} Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.076277 5050 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4sgsq" event={"ID":"56e4b575-9925-4660-827a-92b634bd178d","Type":"ContainerDied","Data":"535a12077c6f3f3c875fc1e33b235425caf3f38f5422dac06adf32d9ef15cd6d"} Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.076306 5050 scope.go:117] "RemoveContainer" containerID="cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.075990 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sgsq" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.105326 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sgsq"] Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.112983 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sgsq"] Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.114314 5050 scope.go:117] "RemoveContainer" containerID="ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.136218 5050 scope.go:117] "RemoveContainer" containerID="d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.194587 5050 scope.go:117] "RemoveContainer" containerID="cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e" Jan 31 06:11:12 crc kubenswrapper[5050]: E0131 06:11:12.195232 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e\": container with ID starting with cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e not found: ID does not exist" containerID="cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.195279 5050 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e"} err="failed to get container status \"cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e\": rpc error: code = NotFound desc = could not find container \"cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e\": container with ID starting with cee61f3a212beff690ffd19d961816e0cd1d613a24ac17b03485c434ff7c936e not found: ID does not exist" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.195316 5050 scope.go:117] "RemoveContainer" containerID="ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6" Jan 31 06:11:12 crc kubenswrapper[5050]: E0131 06:11:12.196210 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6\": container with ID starting with ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6 not found: ID does not exist" containerID="ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.196295 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6"} err="failed to get container status \"ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6\": rpc error: code = NotFound desc = could not find container \"ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6\": container with ID starting with ee1152f99fd2d8ef05cd11d373f08d75af83ba68cfe6b71920fe496d89d8afc6 not found: ID does not exist" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.196321 5050 scope.go:117] "RemoveContainer" containerID="d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca" Jan 31 06:11:12 crc kubenswrapper[5050]: E0131 
06:11:12.196714 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca\": container with ID starting with d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca not found: ID does not exist" containerID="d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca" Jan 31 06:11:12 crc kubenswrapper[5050]: I0131 06:11:12.196741 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca"} err="failed to get container status \"d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca\": rpc error: code = NotFound desc = could not find container \"d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca\": container with ID starting with d2269e3e886dc9da9716a3745d0d8d369b2754969ed1f105974c1b29a93c23ca not found: ID does not exist" Jan 31 06:11:13 crc kubenswrapper[5050]: I0131 06:11:13.756197 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e4b575-9925-4660-827a-92b634bd178d" path="/var/lib/kubelet/pods/56e4b575-9925-4660-827a-92b634bd178d/volumes" Jan 31 06:11:19 crc kubenswrapper[5050]: I0131 06:11:19.736599 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:11:19 crc kubenswrapper[5050]: E0131 06:11:19.737642 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:11:30 crc kubenswrapper[5050]: I0131 06:11:30.737201 
5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:11:30 crc kubenswrapper[5050]: E0131 06:11:30.738225 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:11:43 crc kubenswrapper[5050]: I0131 06:11:43.736305 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:11:43 crc kubenswrapper[5050]: E0131 06:11:43.736996 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:11:57 crc kubenswrapper[5050]: I0131 06:11:57.737238 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:11:57 crc kubenswrapper[5050]: E0131 06:11:57.743250 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:12:09 crc kubenswrapper[5050]: I0131 
06:12:09.738934 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:12:09 crc kubenswrapper[5050]: E0131 06:12:09.740501 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:12:20 crc kubenswrapper[5050]: I0131 06:12:20.736782 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:12:20 crc kubenswrapper[5050]: E0131 06:12:20.737492 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:12:31 crc kubenswrapper[5050]: I0131 06:12:31.737294 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:12:31 crc kubenswrapper[5050]: E0131 06:12:31.738086 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:12:41 crc 
kubenswrapper[5050]: I0131 06:12:41.851457 5050 generic.go:334] "Generic (PLEG): container finished" podID="b5ede333-cbdc-4c95-ac45-0ea62a8876f0" containerID="40821508a184cbaea52e06b08f2610889737f2b60a283728aa945e2c5ffd1a5a" exitCode=0 Jan 31 06:12:41 crc kubenswrapper[5050]: I0131 06:12:41.851681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" event={"ID":"b5ede333-cbdc-4c95-ac45-0ea62a8876f0","Type":"ContainerDied","Data":"40821508a184cbaea52e06b08f2610889737f2b60a283728aa945e2c5ffd1a5a"} Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.235039 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341181 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-extra-config-0\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341246 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-782s8\" (UniqueName: \"kubernetes.io/projected/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-kube-api-access-782s8\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341277 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph-nova-0\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341314 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-inventory\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341351 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-0\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341503 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-1\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341550 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-custom-ceph-combined-ca-bundle\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341575 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-0\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341600 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph\") pod 
\"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341638 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-1\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.341662 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ssh-key-openstack-edpm-ipam\") pod \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\" (UID: \"b5ede333-cbdc-4c95-ac45-0ea62a8876f0\") " Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.349887 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-kube-api-access-782s8" (OuterVolumeSpecName: "kube-api-access-782s8") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "kube-api-access-782s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.352488 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph" (OuterVolumeSpecName: "ceph") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.352608 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.368647 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.376224 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-inventory" (OuterVolumeSpecName: "inventory") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.377263 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.379586 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.381566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.386484 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.401262 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.404169 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5ede333-cbdc-4c95-ac45-0ea62a8876f0" (UID: "b5ede333-cbdc-4c95-ac45-0ea62a8876f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444514 5050 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444553 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444564 5050 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444572 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-782s8\" (UniqueName: \"kubernetes.io/projected/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-kube-api-access-782s8\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444582 5050 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444590 5050 
reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444597 5050 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444605 5050 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444613 5050 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444621 5050 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.444630 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5ede333-cbdc-4c95-ac45-0ea62a8876f0-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.872713 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" event={"ID":"b5ede333-cbdc-4c95-ac45-0ea62a8876f0","Type":"ContainerDied","Data":"b29f3cb13fe3cfd6eca70da55f6a587b7c54a8ec94ff2a42fc1e46ccd4c86867"} Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 
06:12:43.873075 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b29f3cb13fe3cfd6eca70da55f6a587b7c54a8ec94ff2a42fc1e46ccd4c86867" Jan 31 06:12:43 crc kubenswrapper[5050]: I0131 06:12:43.872807 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw" Jan 31 06:12:44 crc kubenswrapper[5050]: I0131 06:12:44.737409 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:12:44 crc kubenswrapper[5050]: E0131 06:12:44.739938 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.738785 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:12:59 crc kubenswrapper[5050]: E0131 06:12:59.739531 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.830011 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 06:12:59 crc kubenswrapper[5050]: E0131 06:12:59.830451 5050 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="extract-content" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.830476 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="extract-content" Jan 31 06:12:59 crc kubenswrapper[5050]: E0131 06:12:59.830496 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ede333-cbdc-4c95-ac45-0ea62a8876f0" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.830505 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ede333-cbdc-4c95-ac45-0ea62a8876f0" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 06:12:59 crc kubenswrapper[5050]: E0131 06:12:59.830530 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="extract-utilities" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.830539 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="extract-utilities" Jan 31 06:12:59 crc kubenswrapper[5050]: E0131 06:12:59.830551 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="registry-server" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.830561 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="registry-server" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.830841 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e4b575-9925-4660-827a-92b634bd178d" containerName="registry-server" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.830886 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ede333-cbdc-4c95-ac45-0ea62a8876f0" 
containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.832117 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.833720 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.833808 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.854647 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.886210 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.887986 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.892905 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.903066 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979513 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979566 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-run\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979625 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-run\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979650 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5h8j\" (UniqueName: \"kubernetes.io/projected/4914b8b7-fa26-4e58-85e1-c072305954cf-kube-api-access-p5h8j\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979666 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979683 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979790 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979840 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-sys\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.979877 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980016 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980087 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980153 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980175 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czb7v\" (UniqueName: \"kubernetes.io/projected/1115b898-f052-46bf-886a-489b12a35afb-kube-api-access-czb7v\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980195 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-scripts\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980245 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980272 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980294 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980317 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1115b898-f052-46bf-886a-489b12a35afb-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980353 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980386 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980406 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980428 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980520 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980580 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-lib-modules\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980618 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-dev\") pod 
\"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980690 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4914b8b7-fa26-4e58-85e1-c072305954cf-ceph\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980719 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980749 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980773 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " 
pod="openstack/cinder-volume-volume1-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980789 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-config-data\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:12:59 crc kubenswrapper[5050]: I0131 06:12:59.980807 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082464 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082507 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czb7v\" (UniqueName: \"kubernetes.io/projected/1115b898-f052-46bf-886a-489b12a35afb-kube-api-access-czb7v\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082525 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-scripts\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082544 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082562 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082578 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082595 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1115b898-f052-46bf-886a-489b12a35afb-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082628 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082649 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082670 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082698 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082708 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082742 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-lib-modules\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " 
pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082720 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-lib-modules\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082772 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-dev\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082776 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082789 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082809 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4914b8b7-fa26-4e58-85e1-c072305954cf-ceph\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082823 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082850 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082867 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082881 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-config-data\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082895 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082914 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " 
pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082940 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-run\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082978 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-run\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.082994 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5h8j\" (UniqueName: \"kubernetes.io/projected/4914b8b7-fa26-4e58-85e1-c072305954cf-kube-api-access-p5h8j\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083009 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083014 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083025 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083054 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083085 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-sys\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083109 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083152 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: 
\"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083178 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083241 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083331 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083367 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.084726 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.084786 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.084814 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-run\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.084838 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-run\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.084908 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-dev\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.085005 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.085055 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 
06:13:00.085081 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.085121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.083055 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.085152 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115b898-f052-46bf-886a-489b12a35afb-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.085181 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4914b8b7-fa26-4e58-85e1-c072305954cf-sys\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.090265 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-scripts\") pod 
\"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.096124 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.099011 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4914b8b7-fa26-4e58-85e1-c072305954cf-ceph\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.099026 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.099491 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1115b898-f052-46bf-886a-489b12a35afb-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.099556 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.100072 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.100305 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-config-data\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.102209 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4914b8b7-fa26-4e58-85e1-c072305954cf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.105864 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115b898-f052-46bf-886a-489b12a35afb-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.123697 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czb7v\" (UniqueName: \"kubernetes.io/projected/1115b898-f052-46bf-886a-489b12a35afb-kube-api-access-czb7v\") pod \"cinder-volume-volume1-0\" (UID: \"1115b898-f052-46bf-886a-489b12a35afb\") " pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.124630 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5h8j\" (UniqueName: \"kubernetes.io/projected/4914b8b7-fa26-4e58-85e1-c072305954cf-kube-api-access-p5h8j\") pod \"cinder-backup-0\" (UID: 
\"4914b8b7-fa26-4e58-85e1-c072305954cf\") " pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.159342 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.206546 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.257282 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-sbgrw"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.258409 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.275195 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-sbgrw"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.417806 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-de63-account-create-update-xrlkn"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.430400 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.436940 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-869cd6f4d9-sfpnr"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.438624 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.442395 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqd6x\" (UniqueName: \"kubernetes.io/projected/28f0fb7d-6777-449f-a447-b4a4fb534df8-kube-api-access-nqd6x\") pod \"manila-db-create-sbgrw\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.442624 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28f0fb7d-6777-449f-a447-b4a4fb534df8-operator-scripts\") pod \"manila-db-create-sbgrw\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.446514 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.446635 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.446779 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.446875 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.446896 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xqcmq" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.450963 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-de63-account-create-update-xrlkn"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.468698 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-869cd6f4d9-sfpnr"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.521023 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.522452 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.525926 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.526002 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hmzgk" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.526125 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.526389 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.541064 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548355 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-config-data\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548429 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28f0fb7d-6777-449f-a447-b4a4fb534df8-operator-scripts\") pod \"manila-db-create-sbgrw\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " 
pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548482 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6e1af6-67b5-4266-857e-9f2031143f91-logs\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548513 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-ceph\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548554 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwzl\" (UniqueName: \"kubernetes.io/projected/aa6e1af6-67b5-4266-857e-9f2031143f91-kube-api-access-mvwzl\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548601 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a786d1-99eb-4c32-98c6-876fb67fb320-operator-scripts\") pod \"manila-de63-account-create-update-xrlkn\" (UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548621 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548639 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-logs\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548667 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548700 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa6e1af6-67b5-4266-857e-9f2031143f91-horizon-secret-key\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548721 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tvw\" (UniqueName: \"kubernetes.io/projected/b7a786d1-99eb-4c32-98c6-876fb67fb320-kube-api-access-q5tvw\") pod \"manila-de63-account-create-update-xrlkn\" (UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548744 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-config-data\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548769 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqd6x\" (UniqueName: \"kubernetes.io/projected/28f0fb7d-6777-449f-a447-b4a4fb534df8-kube-api-access-nqd6x\") pod \"manila-db-create-sbgrw\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.548799 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.549517 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28f0fb7d-6777-449f-a447-b4a4fb534df8-operator-scripts\") pod \"manila-db-create-sbgrw\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.549920 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.549987 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-scripts\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.550043 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-scripts\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.550083 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5z8q\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-kube-api-access-q5z8q\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.600245 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85c5d7444f-42m7z"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.602670 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.606274 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqd6x\" (UniqueName: \"kubernetes.io/projected/28f0fb7d-6777-449f-a447-b4a4fb534df8-kube-api-access-nqd6x\") pod \"manila-db-create-sbgrw\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.643863 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.645057 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85c5d7444f-42m7z"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652347 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-config-data\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652409 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6e1af6-67b5-4266-857e-9f2031143f91-logs\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652439 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-ceph\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652489 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvwzl\" (UniqueName: \"kubernetes.io/projected/aa6e1af6-67b5-4266-857e-9f2031143f91-kube-api-access-mvwzl\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652519 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a786d1-99eb-4c32-98c6-876fb67fb320-operator-scripts\") pod 
\"manila-de63-account-create-update-xrlkn\" (UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652537 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652553 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-logs\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652575 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652602 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa6e1af6-67b5-4266-857e-9f2031143f91-horizon-secret-key\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5tvw\" (UniqueName: \"kubernetes.io/projected/b7a786d1-99eb-4c32-98c6-876fb67fb320-kube-api-access-q5tvw\") pod \"manila-de63-account-create-update-xrlkn\" 
(UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652640 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-config-data\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652668 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652687 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652705 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-scripts\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652726 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-scripts\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 
06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.652746 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5z8q\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-kube-api-access-q5z8q\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.656271 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6e1af6-67b5-4266-857e-9f2031143f91-logs\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.656538 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.656783 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.657587 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a786d1-99eb-4c32-98c6-876fb67fb320-operator-scripts\") pod \"manila-de63-account-create-update-xrlkn\" (UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.659052 
5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-config-data\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.661395 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-scripts\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.670011 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-logs\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.676744 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa6e1af6-67b5-4266-857e-9f2031143f91-horizon-secret-key\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.677850 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-scripts\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.679746 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-config-data\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.692093 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.692523 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.694770 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.696613 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.699533 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-ceph\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.701796 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.702039 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.710872 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvwzl\" (UniqueName: \"kubernetes.io/projected/aa6e1af6-67b5-4266-857e-9f2031143f91-kube-api-access-mvwzl\") pod \"horizon-869cd6f4d9-sfpnr\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.724574 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5tvw\" (UniqueName: \"kubernetes.io/projected/b7a786d1-99eb-4c32-98c6-876fb67fb320-kube-api-access-q5tvw\") pod \"manila-de63-account-create-update-xrlkn\" (UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.729296 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5z8q\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-kube-api-access-q5z8q\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 
06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.749558 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.756939 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-config-data\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.757438 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52m8\" (UniqueName: \"kubernetes.io/projected/1968bbde-0a5e-48e1-b234-6b59addb2bd8-kube-api-access-v52m8\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.757929 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-scripts\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.758129 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1968bbde-0a5e-48e1-b234-6b59addb2bd8-horizon-secret-key\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.758303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1968bbde-0a5e-48e1-b234-6b59addb2bd8-logs\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.759847 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.768550 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.788792 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.857135 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.860844 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861083 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861110 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrgt\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-kube-api-access-lcrgt\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861158 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-config-data\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861189 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v52m8\" (UniqueName: \"kubernetes.io/projected/1968bbde-0a5e-48e1-b234-6b59addb2bd8-kube-api-access-v52m8\") pod \"horizon-85c5d7444f-42m7z\" (UID: 
\"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861248 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861288 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-scripts\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861342 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1968bbde-0a5e-48e1-b234-6b59addb2bd8-horizon-secret-key\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861385 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1968bbde-0a5e-48e1-b234-6b59addb2bd8-logs\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861406 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" 
Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861462 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861482 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861507 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.861544 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.865312 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1968bbde-0a5e-48e1-b234-6b59addb2bd8-logs\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.865545 
5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-scripts\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.867167 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-config-data\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.868748 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1968bbde-0a5e-48e1-b234-6b59addb2bd8-horizon-secret-key\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.882799 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v52m8\" (UniqueName: \"kubernetes.io/projected/1968bbde-0a5e-48e1-b234-6b59addb2bd8-kube-api-access-v52m8\") pod \"horizon-85c5d7444f-42m7z\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964300 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964387 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964436 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964483 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964534 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964551 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrgt\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-kube-api-access-lcrgt\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964675 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.964725 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.965153 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.968047 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.968806 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.975797 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.976659 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.977873 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.979255 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.982242 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.984337 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:00 crc kubenswrapper[5050]: I0131 06:13:00.992566 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrgt\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-kube-api-access-lcrgt\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.006765 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.083161 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.198271 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.440567 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-de63-account-create-update-xrlkn"] Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.479392 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-sbgrw"] Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.584715 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-869cd6f4d9-sfpnr"] Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.636577 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.669496 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.762327 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85c5d7444f-42m7z"] Jan 31 06:13:01 crc kubenswrapper[5050]: I0131 06:13:01.880230 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.040806 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53558c1e-c4b9-4da5-a6c0-00939b163ab3","Type":"ContainerStarted","Data":"4cf870b3956b35d815839f82a84c5a617cef13b2ea2632dde7dc1f8ce68279c1"} Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.043222 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-sbgrw" event={"ID":"28f0fb7d-6777-449f-a447-b4a4fb534df8","Type":"ContainerStarted","Data":"51b203ffc9b02cc0669809d1db7c2b6482e48cb16c21ba3c16ecb193cb3d278d"} Jan 31 
06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.044703 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-de63-account-create-update-xrlkn" event={"ID":"b7a786d1-99eb-4c32-98c6-876fb67fb320","Type":"ContainerStarted","Data":"c05ef91e599f772cf7acfac4a9c66cc310f0ce53b943d69a5d0f839a0d9f65e2"} Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.045964 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869cd6f4d9-sfpnr" event={"ID":"aa6e1af6-67b5-4266-857e-9f2031143f91","Type":"ContainerStarted","Data":"57e70b45e4bdee400f205b0c4654e5b730d71ca430b0a3e0ee31184a3fda43fb"} Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.047944 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4914b8b7-fa26-4e58-85e1-c072305954cf","Type":"ContainerStarted","Data":"0afa20ca1461491df16e175cbb34c22d9b823e456e749f3b608d1df98224b097"} Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.049411 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85c5d7444f-42m7z" event={"ID":"1968bbde-0a5e-48e1-b234-6b59addb2bd8","Type":"ContainerStarted","Data":"c07121a1780dc31e03c4fbd605f473a9d34b6a1f01c96d9499fd41510bb4e64e"} Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.050636 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1115b898-f052-46bf-886a-489b12a35afb","Type":"ContainerStarted","Data":"64133e4f5e6ad786191f21b0e64c2fe96f79f2fa82b74705ff1abb7ab6a57618"} Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.273636 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:02 crc kubenswrapper[5050]: W0131 06:13:02.286335 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd34fea0f_df99_4706_9e1d_b10d8bc6c37d.slice/crio-29dd887755b7e1cc5dc5fd3941e805c16063b81a2b9db5191b3c08a8dcc7e3c4 WatchSource:0}: Error finding container 29dd887755b7e1cc5dc5fd3941e805c16063b81a2b9db5191b3c08a8dcc7e3c4: Status 404 returned error can't find the container with id 29dd887755b7e1cc5dc5fd3941e805c16063b81a2b9db5191b3c08a8dcc7e3c4 Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.930087 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85c5d7444f-42m7z"] Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.961626 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-fff6c4f96-4xg9k"] Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.964189 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.969749 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 06:13:02 crc kubenswrapper[5050]: I0131 06:13:02.979161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fff6c4f96-4xg9k"] Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.015577 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.049105 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-scripts\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.049629 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjw5\" (UniqueName: 
\"kubernetes.io/projected/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-kube-api-access-wrjw5\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.049799 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-config-data\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.049917 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-combined-ca-bundle\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.050104 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-tls-certs\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.050238 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-secret-key\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.050354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-logs\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.076490 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-869cd6f4d9-sfpnr"] Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.100224 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d34fea0f-df99-4706-9e1d-b10d8bc6c37d","Type":"ContainerStarted","Data":"29dd887755b7e1cc5dc5fd3941e805c16063b81a2b9db5191b3c08a8dcc7e3c4"} Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.109908 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53558c1e-c4b9-4da5-a6c0-00939b163ab3","Type":"ContainerStarted","Data":"1d0c546d806d65dc846c22c4d95a1cb9e02ad0fffde972b1c1d969446b081913"} Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.136010 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86b8468d8-lbt9b"] Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.137619 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.153183 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-scripts\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.153277 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjw5\" (UniqueName: \"kubernetes.io/projected/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-kube-api-access-wrjw5\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.153688 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-config-data\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.154067 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-scripts\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.154890 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-config-data\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.155003 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-combined-ca-bundle\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.155108 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-tls-certs\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.155175 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-secret-key\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.155199 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-logs\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.165725 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-logs\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.168207 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-sbgrw" 
event={"ID":"28f0fb7d-6777-449f-a447-b4a4fb534df8","Type":"ContainerStarted","Data":"67032e162857caf5cd47681d5b5744e52f380a039a3e24bbcd051f4af27c14dd"} Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.168588 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-de63-account-create-update-xrlkn" event={"ID":"b7a786d1-99eb-4c32-98c6-876fb67fb320","Type":"ContainerStarted","Data":"1e1a005c97de7c0519e244cc2103adae0fe19182e252e721c725525c0a1437d3"} Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.170144 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-combined-ca-bundle\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.172218 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-tls-certs\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.178191 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.182789 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjw5\" (UniqueName: \"kubernetes.io/projected/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-kube-api-access-wrjw5\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.185181 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-secret-key\") pod \"horizon-fff6c4f96-4xg9k\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.206689 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86b8468d8-lbt9b"] Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.231490 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-sbgrw" podStartSLOduration=3.231472391 podStartE2EDuration="3.231472391s" podCreationTimestamp="2026-01-31 06:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:13:03.191813788 +0000 UTC m=+3108.240975384" watchObservedRunningTime="2026-01-31 06:13:03.231472391 +0000 UTC m=+3108.280633997" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.256551 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-combined-ca-bundle\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.256627 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5nqt\" (UniqueName: \"kubernetes.io/projected/5ab353c6-0ce1-463c-b17c-2346de6787db-kube-api-access-k5nqt\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.256693 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/5ab353c6-0ce1-463c-b17c-2346de6787db-config-data\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.256803 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ab353c6-0ce1-463c-b17c-2346de6787db-logs\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.256854 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ab353c6-0ce1-463c-b17c-2346de6787db-scripts\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.256914 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-horizon-secret-key\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.256927 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-de63-account-create-update-xrlkn" podStartSLOduration=3.256907209 podStartE2EDuration="3.256907209s" podCreationTimestamp="2026-01-31 06:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:13:03.211652664 +0000 UTC m=+3108.260814260" watchObservedRunningTime="2026-01-31 06:13:03.256907209 +0000 UTC m=+3108.306068805" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 
06:13:03.256945 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-horizon-tls-certs\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.292849 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.358588 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ab353c6-0ce1-463c-b17c-2346de6787db-logs\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.358673 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ab353c6-0ce1-463c-b17c-2346de6787db-scripts\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.358761 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-horizon-secret-key\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.358807 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-horizon-tls-certs\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " 
pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.358889 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-combined-ca-bundle\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.358928 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5nqt\" (UniqueName: \"kubernetes.io/projected/5ab353c6-0ce1-463c-b17c-2346de6787db-kube-api-access-k5nqt\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.358983 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ab353c6-0ce1-463c-b17c-2346de6787db-config-data\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.361261 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ab353c6-0ce1-463c-b17c-2346de6787db-config-data\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.361765 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ab353c6-0ce1-463c-b17c-2346de6787db-logs\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.363031 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ab353c6-0ce1-463c-b17c-2346de6787db-scripts\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.365496 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-horizon-secret-key\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.368342 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-horizon-tls-certs\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.370622 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab353c6-0ce1-463c-b17c-2346de6787db-combined-ca-bundle\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.380319 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5nqt\" (UniqueName: \"kubernetes.io/projected/5ab353c6-0ce1-463c-b17c-2346de6787db-kube-api-access-k5nqt\") pod \"horizon-86b8468d8-lbt9b\" (UID: \"5ab353c6-0ce1-463c-b17c-2346de6787db\") " pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.674895 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:03 crc kubenswrapper[5050]: I0131 06:13:03.879186 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fff6c4f96-4xg9k"] Jan 31 06:13:03 crc kubenswrapper[5050]: W0131 06:13:03.887370 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0f4bc0_6a5c_4b67_9e8f_95bc2caa19a7.slice/crio-a54c2ee22b0be72ae60ddb6e6f38f76fb1a6abf3c1da84ab973249df9d132da9 WatchSource:0}: Error finding container a54c2ee22b0be72ae60ddb6e6f38f76fb1a6abf3c1da84ab973249df9d132da9: Status 404 returned error can't find the container with id a54c2ee22b0be72ae60ddb6e6f38f76fb1a6abf3c1da84ab973249df9d132da9 Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.179215 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d34fea0f-df99-4706-9e1d-b10d8bc6c37d","Type":"ContainerStarted","Data":"a096a7619cba41bd670914b88892f4bdd13c50668fa5fb9e3a671ef3a462377a"} Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.184894 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fff6c4f96-4xg9k" event={"ID":"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7","Type":"ContainerStarted","Data":"a54c2ee22b0be72ae60ddb6e6f38f76fb1a6abf3c1da84ab973249df9d132da9"} Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.191139 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53558c1e-c4b9-4da5-a6c0-00939b163ab3","Type":"ContainerStarted","Data":"6d57fc9fa94ac725b74cbefae973dd21e79af0ed29b7545692686d2a059c5a89"} Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.191316 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-log" 
containerID="cri-o://1d0c546d806d65dc846c22c4d95a1cb9e02ad0fffde972b1c1d969446b081913" gracePeriod=30 Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.191840 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-httpd" containerID="cri-o://6d57fc9fa94ac725b74cbefae973dd21e79af0ed29b7545692686d2a059c5a89" gracePeriod=30 Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.202123 5050 generic.go:334] "Generic (PLEG): container finished" podID="28f0fb7d-6777-449f-a447-b4a4fb534df8" containerID="67032e162857caf5cd47681d5b5744e52f380a039a3e24bbcd051f4af27c14dd" exitCode=0 Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.203380 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-sbgrw" event={"ID":"28f0fb7d-6777-449f-a447-b4a4fb534df8","Type":"ContainerDied","Data":"67032e162857caf5cd47681d5b5744e52f380a039a3e24bbcd051f4af27c14dd"} Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.244411 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.244388955 podStartE2EDuration="4.244388955s" podCreationTimestamp="2026-01-31 06:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:13:04.232219836 +0000 UTC m=+3109.281381432" watchObservedRunningTime="2026-01-31 06:13:04.244388955 +0000 UTC m=+3109.293550551" Jan 31 06:13:04 crc kubenswrapper[5050]: I0131 06:13:04.312810 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86b8468d8-lbt9b"] Jan 31 06:13:04 crc kubenswrapper[5050]: W0131 06:13:04.772106 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ab353c6_0ce1_463c_b17c_2346de6787db.slice/crio-56bc7e8a61db5b9ad9b6d603d594a46873b42bf697faca79b9b804992d43fe61 WatchSource:0}: Error finding container 56bc7e8a61db5b9ad9b6d603d594a46873b42bf697faca79b9b804992d43fe61: Status 404 returned error can't find the container with id 56bc7e8a61db5b9ad9b6d603d594a46873b42bf697faca79b9b804992d43fe61 Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.216229 5050 generic.go:334] "Generic (PLEG): container finished" podID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerID="6d57fc9fa94ac725b74cbefae973dd21e79af0ed29b7545692686d2a059c5a89" exitCode=143 Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.216260 5050 generic.go:334] "Generic (PLEG): container finished" podID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerID="1d0c546d806d65dc846c22c4d95a1cb9e02ad0fffde972b1c1d969446b081913" exitCode=143 Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.216293 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53558c1e-c4b9-4da5-a6c0-00939b163ab3","Type":"ContainerDied","Data":"6d57fc9fa94ac725b74cbefae973dd21e79af0ed29b7545692686d2a059c5a89"} Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.216318 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53558c1e-c4b9-4da5-a6c0-00939b163ab3","Type":"ContainerDied","Data":"1d0c546d806d65dc846c22c4d95a1cb9e02ad0fffde972b1c1d969446b081913"} Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.220025 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d34fea0f-df99-4706-9e1d-b10d8bc6c37d","Type":"ContainerStarted","Data":"7bfe93e4ae91fc787a9760e75c205690e36c687dcfe796d7e309407ce998ce89"} Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.220085 5050 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-internal-api-0" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-log" containerID="cri-o://a096a7619cba41bd670914b88892f4bdd13c50668fa5fb9e3a671ef3a462377a" gracePeriod=30 Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.220104 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-httpd" containerID="cri-o://7bfe93e4ae91fc787a9760e75c205690e36c687dcfe796d7e309407ce998ce89" gracePeriod=30 Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.223583 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b8468d8-lbt9b" event={"ID":"5ab353c6-0ce1-463c-b17c-2346de6787db","Type":"ContainerStarted","Data":"56bc7e8a61db5b9ad9b6d603d594a46873b42bf697faca79b9b804992d43fe61"} Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.254813 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.254789081 podStartE2EDuration="5.254789081s" podCreationTimestamp="2026-01-31 06:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:13:05.245822518 +0000 UTC m=+3110.294984124" watchObservedRunningTime="2026-01-31 06:13:05.254789081 +0000 UTC m=+3110.303950677" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.489446 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542241 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-scripts\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542303 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-logs\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542339 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-httpd-run\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542409 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-combined-ca-bundle\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542461 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542495 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5z8q\" (UniqueName: 
\"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-kube-api-access-q5z8q\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542592 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-public-tls-certs\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542622 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-config-data\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.542701 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-ceph\") pod \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\" (UID: \"53558c1e-c4b9-4da5-a6c0-00939b163ab3\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.548839 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.548864 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-ceph" (OuterVolumeSpecName: "ceph") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.548972 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-kube-api-access-q5z8q" (OuterVolumeSpecName: "kube-api-access-q5z8q") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "kube-api-access-q5z8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.549786 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-logs" (OuterVolumeSpecName: "logs") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.552334 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.554897 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-scripts" (OuterVolumeSpecName: "scripts") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.595681 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.606461 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.619536 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-config-data" (OuterVolumeSpecName: "config-data") pod "53558c1e-c4b9-4da5-a6c0-00939b163ab3" (UID: "53558c1e-c4b9-4da5-a6c0-00939b163ab3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.657682 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.657714 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5z8q\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-kube-api-access-q5z8q\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.663008 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.663052 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.663062 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/53558c1e-c4b9-4da5-a6c0-00939b163ab3-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.663070 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.663079 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-logs\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.663089 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/53558c1e-c4b9-4da5-a6c0-00939b163ab3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.663097 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53558c1e-c4b9-4da5-a6c0-00939b163ab3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.667606 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.680163 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.769295 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.894620 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqd6x\" (UniqueName: \"kubernetes.io/projected/28f0fb7d-6777-449f-a447-b4a4fb534df8-kube-api-access-nqd6x\") pod \"28f0fb7d-6777-449f-a447-b4a4fb534df8\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.894768 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28f0fb7d-6777-449f-a447-b4a4fb534df8-operator-scripts\") pod \"28f0fb7d-6777-449f-a447-b4a4fb534df8\" (UID: \"28f0fb7d-6777-449f-a447-b4a4fb534df8\") " Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.899724 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28f0fb7d-6777-449f-a447-b4a4fb534df8-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "28f0fb7d-6777-449f-a447-b4a4fb534df8" (UID: "28f0fb7d-6777-449f-a447-b4a4fb534df8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:13:05 crc kubenswrapper[5050]: I0131 06:13:05.978225 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f0fb7d-6777-449f-a447-b4a4fb534df8-kube-api-access-nqd6x" (OuterVolumeSpecName: "kube-api-access-nqd6x") pod "28f0fb7d-6777-449f-a447-b4a4fb534df8" (UID: "28f0fb7d-6777-449f-a447-b4a4fb534df8"). InnerVolumeSpecName "kube-api-access-nqd6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:05.995979 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28f0fb7d-6777-449f-a447-b4a4fb534df8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:05.996051 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqd6x\" (UniqueName: \"kubernetes.io/projected/28f0fb7d-6777-449f-a447-b4a4fb534df8-kube-api-access-nqd6x\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.237792 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53558c1e-c4b9-4da5-a6c0-00939b163ab3","Type":"ContainerDied","Data":"4cf870b3956b35d815839f82a84c5a617cef13b2ea2632dde7dc1f8ce68279c1"} Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.238149 5050 scope.go:117] "RemoveContainer" containerID="6d57fc9fa94ac725b74cbefae973dd21e79af0ed29b7545692686d2a059c5a89" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.238328 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.243604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-sbgrw" event={"ID":"28f0fb7d-6777-449f-a447-b4a4fb534df8","Type":"ContainerDied","Data":"51b203ffc9b02cc0669809d1db7c2b6482e48cb16c21ba3c16ecb193cb3d278d"} Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.243647 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b203ffc9b02cc0669809d1db7c2b6482e48cb16c21ba3c16ecb193cb3d278d" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.243714 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-sbgrw" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.249692 5050 generic.go:334] "Generic (PLEG): container finished" podID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerID="7bfe93e4ae91fc787a9760e75c205690e36c687dcfe796d7e309407ce998ce89" exitCode=0 Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.249722 5050 generic.go:334] "Generic (PLEG): container finished" podID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerID="a096a7619cba41bd670914b88892f4bdd13c50668fa5fb9e3a671ef3a462377a" exitCode=143 Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.249743 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d34fea0f-df99-4706-9e1d-b10d8bc6c37d","Type":"ContainerDied","Data":"7bfe93e4ae91fc787a9760e75c205690e36c687dcfe796d7e309407ce998ce89"} Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.249767 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d34fea0f-df99-4706-9e1d-b10d8bc6c37d","Type":"ContainerDied","Data":"a096a7619cba41bd670914b88892f4bdd13c50668fa5fb9e3a671ef3a462377a"} Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.269288 5050 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.277387 5050 scope.go:117] "RemoveContainer" containerID="1d0c546d806d65dc846c22c4d95a1cb9e02ad0fffde972b1c1d969446b081913" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.288712 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.310871 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:06 crc kubenswrapper[5050]: E0131 06:13:06.311231 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-log" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.311247 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-log" Jan 31 06:13:06 crc kubenswrapper[5050]: E0131 06:13:06.311259 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-httpd" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.311265 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-httpd" Jan 31 06:13:06 crc kubenswrapper[5050]: E0131 06:13:06.311276 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f0fb7d-6777-449f-a447-b4a4fb534df8" containerName="mariadb-database-create" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.311283 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f0fb7d-6777-449f-a447-b4a4fb534df8" containerName="mariadb-database-create" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.311469 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-httpd" Jan 
31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.311527 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f0fb7d-6777-449f-a447-b4a4fb534df8" containerName="mariadb-database-create" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.311536 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" containerName="glance-log" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.312484 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.314505 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.314942 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.322240 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.504747 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa542e94-2400-4e6d-9576-687a18529d96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.504803 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa542e94-2400-4e6d-9576-687a18529d96-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.504970 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcth\" (UniqueName: \"kubernetes.io/projected/fa542e94-2400-4e6d-9576-687a18529d96-kube-api-access-2tcth\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.505083 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa542e94-2400-4e6d-9576-687a18529d96-logs\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.505197 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.505253 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.505293 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 
06:13:06.505401 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.505695 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.607763 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.607852 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.607914 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa542e94-2400-4e6d-9576-687a18529d96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.607982 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa542e94-2400-4e6d-9576-687a18529d96-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.608070 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tcth\" (UniqueName: \"kubernetes.io/projected/fa542e94-2400-4e6d-9576-687a18529d96-kube-api-access-2tcth\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.608118 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa542e94-2400-4e6d-9576-687a18529d96-logs\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.608170 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.608204 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.608230 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.608706 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.608983 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa542e94-2400-4e6d-9576-687a18529d96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.609263 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa542e94-2400-4e6d-9576-687a18529d96-logs\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.615674 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.618572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.618818 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.621823 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa542e94-2400-4e6d-9576-687a18529d96-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.622181 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa542e94-2400-4e6d-9576-687a18529d96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.665140 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tcth\" (UniqueName: \"kubernetes.io/projected/fa542e94-2400-4e6d-9576-687a18529d96-kube-api-access-2tcth\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.667979 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fa542e94-2400-4e6d-9576-687a18529d96\") " 
pod="openstack/glance-default-external-api-0" Jan 31 06:13:06 crc kubenswrapper[5050]: I0131 06:13:06.687413 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 06:13:07 crc kubenswrapper[5050]: I0131 06:13:07.260531 5050 generic.go:334] "Generic (PLEG): container finished" podID="b7a786d1-99eb-4c32-98c6-876fb67fb320" containerID="1e1a005c97de7c0519e244cc2103adae0fe19182e252e721c725525c0a1437d3" exitCode=0 Jan 31 06:13:07 crc kubenswrapper[5050]: I0131 06:13:07.260579 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-de63-account-create-update-xrlkn" event={"ID":"b7a786d1-99eb-4c32-98c6-876fb67fb320","Type":"ContainerDied","Data":"1e1a005c97de7c0519e244cc2103adae0fe19182e252e721c725525c0a1437d3"} Jan 31 06:13:07 crc kubenswrapper[5050]: I0131 06:13:07.787527 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53558c1e-c4b9-4da5-a6c0-00939b163ab3" path="/var/lib/kubelet/pods/53558c1e-c4b9-4da5-a6c0-00939b163ab3/volumes" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.200422 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.284773 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d34fea0f-df99-4706-9e1d-b10d8bc6c37d","Type":"ContainerDied","Data":"29dd887755b7e1cc5dc5fd3941e805c16063b81a2b9db5191b3c08a8dcc7e3c4"} Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.284832 5050 scope.go:117] "RemoveContainer" containerID="7bfe93e4ae91fc787a9760e75c205690e36c687dcfe796d7e309407ce998ce89" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.285129 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.366715 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-internal-tls-certs\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.367142 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-httpd-run\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.367442 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-combined-ca-bundle\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.367618 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-scripts\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.367781 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.368171 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcrgt\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-kube-api-access-lcrgt\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.368375 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-logs\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.368563 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-ceph\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.368793 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-config-data\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.368934 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\" (UID: \"d34fea0f-df99-4706-9e1d-b10d8bc6c37d\") " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.369437 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-logs" (OuterVolumeSpecName: "logs") pod 
"d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.370140 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-logs\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.370249 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.373269 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-kube-api-access-lcrgt" (OuterVolumeSpecName: "kube-api-access-lcrgt") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "kube-api-access-lcrgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.373705 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-ceph" (OuterVolumeSpecName: "ceph") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.373786 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.376785 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-scripts" (OuterVolumeSpecName: "scripts") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.411162 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.431976 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.433980 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-config-data" (OuterVolumeSpecName: "config-data") pod "d34fea0f-df99-4706-9e1d-b10d8bc6c37d" (UID: "d34fea0f-df99-4706-9e1d-b10d8bc6c37d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.472143 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.472178 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.472190 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcrgt\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-kube-api-access-lcrgt\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.472204 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.472215 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.472258 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.472272 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34fea0f-df99-4706-9e1d-b10d8bc6c37d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.494324 5050 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.573744 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.624099 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.638834 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.655509 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:09 crc kubenswrapper[5050]: E0131 06:13:09.656116 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-log" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.656139 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-log" Jan 31 06:13:09 crc kubenswrapper[5050]: E0131 06:13:09.656171 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-httpd" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.656182 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-httpd" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.656366 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-httpd" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.656400 5050 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" containerName="glance-log" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.657365 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.668315 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.668667 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.706912 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.748211 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34fea0f-df99-4706-9e1d-b10d8bc6c37d" path="/var/lib/kubelet/pods/d34fea0f-df99-4706-9e1d-b10d8bc6c37d/volumes" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.778286 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-ceph\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.778450 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.778543 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.778714 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.778845 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.778901 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.779071 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-kube-api-access-v45wh\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.779279 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.779362 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.881512 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-kube-api-access-v45wh\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.881865 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.881895 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.881921 5050 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-ceph\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.881969 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.881995 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.882032 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.882068 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.882084 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.882195 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.882773 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.883028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.888175 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.889741 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.890247 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.890290 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-ceph\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.890356 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.899613 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/2dd4adbc-b40c-4d55-8f48-b98cefb276dc-kube-api-access-v45wh\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:09 crc kubenswrapper[5050]: I0131 06:13:09.907773 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2dd4adbc-b40c-4d55-8f48-b98cefb276dc\") " pod="openstack/glance-default-internal-api-0" Jan 31 06:13:10 crc kubenswrapper[5050]: I0131 06:13:10.003527 5050 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:12 crc kubenswrapper[5050]: I0131 06:13:12.736758 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:13:12 crc kubenswrapper[5050]: E0131 06:13:12.737456 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.032655 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.148472 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a786d1-99eb-4c32-98c6-876fb67fb320-operator-scripts\") pod \"b7a786d1-99eb-4c32-98c6-876fb67fb320\" (UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.148623 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5tvw\" (UniqueName: \"kubernetes.io/projected/b7a786d1-99eb-4c32-98c6-876fb67fb320-kube-api-access-q5tvw\") pod \"b7a786d1-99eb-4c32-98c6-876fb67fb320\" (UID: \"b7a786d1-99eb-4c32-98c6-876fb67fb320\") " Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.149494 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a786d1-99eb-4c32-98c6-876fb67fb320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b7a786d1-99eb-4c32-98c6-876fb67fb320" (UID: "b7a786d1-99eb-4c32-98c6-876fb67fb320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.155209 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a786d1-99eb-4c32-98c6-876fb67fb320-kube-api-access-q5tvw" (OuterVolumeSpecName: "kube-api-access-q5tvw") pod "b7a786d1-99eb-4c32-98c6-876fb67fb320" (UID: "b7a786d1-99eb-4c32-98c6-876fb67fb320"). InnerVolumeSpecName "kube-api-access-q5tvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.250857 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a786d1-99eb-4c32-98c6-876fb67fb320-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.250892 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5tvw\" (UniqueName: \"kubernetes.io/projected/b7a786d1-99eb-4c32-98c6-876fb67fb320-kube-api-access-q5tvw\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.410700 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-de63-account-create-update-xrlkn" event={"ID":"b7a786d1-99eb-4c32-98c6-876fb67fb320","Type":"ContainerDied","Data":"c05ef91e599f772cf7acfac4a9c66cc310f0ce53b943d69a5d0f839a0d9f65e2"} Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.410755 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-de63-account-create-update-xrlkn" Jan 31 06:13:22 crc kubenswrapper[5050]: I0131 06:13:22.410764 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05ef91e599f772cf7acfac4a9c66cc310f0ce53b943d69a5d0f839a0d9f65e2" Jan 31 06:13:23 crc kubenswrapper[5050]: I0131 06:13:23.736087 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:13:23 crc kubenswrapper[5050]: E0131 06:13:23.736677 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:13:24 crc kubenswrapper[5050]: I0131 06:13:24.011205 5050 scope.go:117] "RemoveContainer" containerID="a096a7619cba41bd670914b88892f4bdd13c50668fa5fb9e3a671ef3a462377a" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.656143 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-bcp7s"] Jan 31 06:13:26 crc kubenswrapper[5050]: E0131 06:13:26.662557 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a786d1-99eb-4c32-98c6-876fb67fb320" containerName="mariadb-account-create-update" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.662748 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a786d1-99eb-4c32-98c6-876fb67fb320" containerName="mariadb-account-create-update" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.663019 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a786d1-99eb-4c32-98c6-876fb67fb320" containerName="mariadb-account-create-update" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 
06:13:26.663728 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.676315 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-mjmd7" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.685681 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.686856 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-bcp7s"] Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.741527 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-config-data\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.741602 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-combined-ca-bundle\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.741672 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6flc\" (UniqueName: \"kubernetes.io/projected/adc7d8ad-779c-4340-b51c-01a232f106b8-kube-api-access-c6flc\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.741698 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" 
(UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-job-config-data\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.843241 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-config-data\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.843345 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-combined-ca-bundle\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.843414 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6flc\" (UniqueName: \"kubernetes.io/projected/adc7d8ad-779c-4340-b51c-01a232f106b8-kube-api-access-c6flc\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.843439 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-job-config-data\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.860565 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-job-config-data\") pod \"manila-db-sync-bcp7s\" (UID: 
\"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.866080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-config-data\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.872543 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-combined-ca-bundle\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.881625 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6flc\" (UniqueName: \"kubernetes.io/projected/adc7d8ad-779c-4340-b51c-01a232f106b8-kube-api-access-c6flc\") pod \"manila-db-sync-bcp7s\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:26 crc kubenswrapper[5050]: I0131 06:13:26.984966 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-bcp7s" Jan 31 06:13:31 crc kubenswrapper[5050]: E0131 06:13:31.705299 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 06:13:31 crc kubenswrapper[5050]: E0131 06:13:31.706052 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b8h86hdh87h655h569h66dhfdhd4h9ch697hf7h585h5fbh67fh87h5b7h66ch548h689h6hfdh575h558h559h55fhc5h668hd4h55ch69h648q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v52m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnl
yRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-85c5d7444f-42m7z_openstack(1968bbde-0a5e-48e1-b234-6b59addb2bd8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 06:13:31 crc kubenswrapper[5050]: E0131 06:13:31.711866 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 06:13:31 crc kubenswrapper[5050]: E0131 06:13:31.712017 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fh588h657h57dh664h7dh676h699h68bhf8hdbh5ch9bh65h5bfh67ch78h684h68fh5d8h65h84h688h87hc9h5h79h65dh648h58fh589h56bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvwzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-869cd6f4d9-sfpnr_openstack(aa6e1af6-67b5-4266-857e-9f2031143f91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 06:13:31 crc kubenswrapper[5050]: E0131 
06:13:31.749839 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-85c5d7444f-42m7z" podUID="1968bbde-0a5e-48e1-b234-6b59addb2bd8" Jan 31 06:13:31 crc kubenswrapper[5050]: E0131 06:13:31.749930 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-869cd6f4d9-sfpnr" podUID="aa6e1af6-67b5-4266-857e-9f2031143f91" Jan 31 06:13:32 crc kubenswrapper[5050]: E0131 06:13:32.207351 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 06:13:32 crc kubenswrapper[5050]: E0131 06:13:32.208265 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nddhd9h577h696hcbh78h59bh697h67h66fh64h568h55fh667h58bh57fhc6h559hc6hfdh7dh54dh57fh7bh679h6h689h69h5fdhf4h5ch685q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5nqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-86b8468d8-lbt9b_openstack(5ab353c6-0ce1-463c-b17c-2346de6787db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 06:13:32 crc kubenswrapper[5050]: I0131 
06:13:32.414157 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 06:13:32 crc kubenswrapper[5050]: W0131 06:13:32.424912 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa542e94_2400_4e6d_9576_687a18529d96.slice/crio-13446bab2c6d7460cb76eedc3b1c47b463813b01d1127c13706886605c925b6d WatchSource:0}: Error finding container 13446bab2c6d7460cb76eedc3b1c47b463813b01d1127c13706886605c925b6d: Status 404 returned error can't find the container with id 13446bab2c6d7460cb76eedc3b1c47b463813b01d1127c13706886605c925b6d Jan 31 06:13:32 crc kubenswrapper[5050]: I0131 06:13:32.514229 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 06:13:32 crc kubenswrapper[5050]: I0131 06:13:32.517132 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4914b8b7-fa26-4e58-85e1-c072305954cf","Type":"ContainerStarted","Data":"6739971cadee1e2532bfcec4b882cc55cf26fbdc50489c2390797118b7df847f"} Jan 31 06:13:32 crc kubenswrapper[5050]: I0131 06:13:32.519418 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1115b898-f052-46bf-886a-489b12a35afb","Type":"ContainerStarted","Data":"ad34304d8e8b48d6ec1f501f34b21e4c8af08fd245f839eace505fbfe66394b3"} Jan 31 06:13:32 crc kubenswrapper[5050]: I0131 06:13:32.523696 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa542e94-2400-4e6d-9576-687a18529d96","Type":"ContainerStarted","Data":"13446bab2c6d7460cb76eedc3b1c47b463813b01d1127c13706886605c925b6d"} Jan 31 06:13:32 crc kubenswrapper[5050]: W0131 06:13:32.528205 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dd4adbc_b40c_4d55_8f48_b98cefb276dc.slice/crio-07631bcb2fe0edd12e08e04609d116b0d5c4fad0d4d45e1058ecd747c1fc3905 WatchSource:0}: Error finding container 07631bcb2fe0edd12e08e04609d116b0d5c4fad0d4d45e1058ecd747c1fc3905: Status 404 returned error can't find the container with id 07631bcb2fe0edd12e08e04609d116b0d5c4fad0d4d45e1058ecd747c1fc3905 Jan 31 06:13:32 crc kubenswrapper[5050]: W0131 06:13:32.613720 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadc7d8ad_779c_4340_b51c_01a232f106b8.slice/crio-d1be62e7521c1377d654969d82bd5d78d82bdc5015e229f63f24c42f0ee98171 WatchSource:0}: Error finding container d1be62e7521c1377d654969d82bd5d78d82bdc5015e229f63f24c42f0ee98171: Status 404 returned error can't find the container with id d1be62e7521c1377d654969d82bd5d78d82bdc5015e229f63f24c42f0ee98171 Jan 31 06:13:32 crc kubenswrapper[5050]: I0131 06:13:32.616728 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-bcp7s"] Jan 31 06:13:32 crc kubenswrapper[5050]: E0131 06:13:32.794720 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-86b8468d8-lbt9b" podUID="5ab353c6-0ce1-463c-b17c-2346de6787db" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.000927 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.008177 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.071928 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-config-data\") pod \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072005 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-scripts\") pod \"aa6e1af6-67b5-4266-857e-9f2031143f91\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072111 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6e1af6-67b5-4266-857e-9f2031143f91-logs\") pod \"aa6e1af6-67b5-4266-857e-9f2031143f91\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072206 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvwzl\" (UniqueName: \"kubernetes.io/projected/aa6e1af6-67b5-4266-857e-9f2031143f91-kube-api-access-mvwzl\") pod \"aa6e1af6-67b5-4266-857e-9f2031143f91\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072257 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1968bbde-0a5e-48e1-b234-6b59addb2bd8-logs\") pod \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072286 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-config-data\") pod \"aa6e1af6-67b5-4266-857e-9f2031143f91\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072333 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-scripts\") pod \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072397 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v52m8\" (UniqueName: \"kubernetes.io/projected/1968bbde-0a5e-48e1-b234-6b59addb2bd8-kube-api-access-v52m8\") pod \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072462 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1968bbde-0a5e-48e1-b234-6b59addb2bd8-horizon-secret-key\") pod \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\" (UID: \"1968bbde-0a5e-48e1-b234-6b59addb2bd8\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.072506 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa6e1af6-67b5-4266-857e-9f2031143f91-horizon-secret-key\") pod \"aa6e1af6-67b5-4266-857e-9f2031143f91\" (UID: \"aa6e1af6-67b5-4266-857e-9f2031143f91\") " Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.074138 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1968bbde-0a5e-48e1-b234-6b59addb2bd8-logs" (OuterVolumeSpecName: "logs") pod "1968bbde-0a5e-48e1-b234-6b59addb2bd8" (UID: "1968bbde-0a5e-48e1-b234-6b59addb2bd8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.074499 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-scripts" (OuterVolumeSpecName: "scripts") pod "1968bbde-0a5e-48e1-b234-6b59addb2bd8" (UID: "1968bbde-0a5e-48e1-b234-6b59addb2bd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.075036 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa6e1af6-67b5-4266-857e-9f2031143f91-logs" (OuterVolumeSpecName: "logs") pod "aa6e1af6-67b5-4266-857e-9f2031143f91" (UID: "aa6e1af6-67b5-4266-857e-9f2031143f91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.075586 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-scripts" (OuterVolumeSpecName: "scripts") pod "aa6e1af6-67b5-4266-857e-9f2031143f91" (UID: "aa6e1af6-67b5-4266-857e-9f2031143f91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.075746 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-config-data" (OuterVolumeSpecName: "config-data") pod "aa6e1af6-67b5-4266-857e-9f2031143f91" (UID: "aa6e1af6-67b5-4266-857e-9f2031143f91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.076046 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-config-data" (OuterVolumeSpecName: "config-data") pod "1968bbde-0a5e-48e1-b234-6b59addb2bd8" (UID: "1968bbde-0a5e-48e1-b234-6b59addb2bd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.080110 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1968bbde-0a5e-48e1-b234-6b59addb2bd8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1968bbde-0a5e-48e1-b234-6b59addb2bd8" (UID: "1968bbde-0a5e-48e1-b234-6b59addb2bd8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.080849 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa6e1af6-67b5-4266-857e-9f2031143f91-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aa6e1af6-67b5-4266-857e-9f2031143f91" (UID: "aa6e1af6-67b5-4266-857e-9f2031143f91"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.093320 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa6e1af6-67b5-4266-857e-9f2031143f91-kube-api-access-mvwzl" (OuterVolumeSpecName: "kube-api-access-mvwzl") pod "aa6e1af6-67b5-4266-857e-9f2031143f91" (UID: "aa6e1af6-67b5-4266-857e-9f2031143f91"). InnerVolumeSpecName "kube-api-access-mvwzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.093748 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1968bbde-0a5e-48e1-b234-6b59addb2bd8-kube-api-access-v52m8" (OuterVolumeSpecName: "kube-api-access-v52m8") pod "1968bbde-0a5e-48e1-b234-6b59addb2bd8" (UID: "1968bbde-0a5e-48e1-b234-6b59addb2bd8"). InnerVolumeSpecName "kube-api-access-v52m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.174993 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v52m8\" (UniqueName: \"kubernetes.io/projected/1968bbde-0a5e-48e1-b234-6b59addb2bd8-kube-api-access-v52m8\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175031 5050 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1968bbde-0a5e-48e1-b234-6b59addb2bd8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175043 5050 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa6e1af6-67b5-4266-857e-9f2031143f91-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175056 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175067 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175077 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/aa6e1af6-67b5-4266-857e-9f2031143f91-logs\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175088 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvwzl\" (UniqueName: \"kubernetes.io/projected/aa6e1af6-67b5-4266-857e-9f2031143f91-kube-api-access-mvwzl\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175098 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1968bbde-0a5e-48e1-b234-6b59addb2bd8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175107 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa6e1af6-67b5-4266-857e-9f2031143f91-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.175116 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1968bbde-0a5e-48e1-b234-6b59addb2bd8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.539213 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b8468d8-lbt9b" event={"ID":"5ab353c6-0ce1-463c-b17c-2346de6787db","Type":"ContainerStarted","Data":"481a6f0b5e734d5c53b61522cab1e21c15f97cda53f9ecb86d0f5da3f81d662a"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.549201 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869cd6f4d9-sfpnr" event={"ID":"aa6e1af6-67b5-4266-857e-9f2031143f91","Type":"ContainerDied","Data":"57e70b45e4bdee400f205b0c4654e5b730d71ca430b0a3e0ee31184a3fda43fb"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.549256 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-869cd6f4d9-sfpnr" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.568296 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4914b8b7-fa26-4e58-85e1-c072305954cf","Type":"ContainerStarted","Data":"6e82328a8aa538df2eb5fdfa5f97f8784fa3248cf54da26d753b877607aec531"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.572483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-bcp7s" event={"ID":"adc7d8ad-779c-4340-b51c-01a232f106b8","Type":"ContainerStarted","Data":"d1be62e7521c1377d654969d82bd5d78d82bdc5015e229f63f24c42f0ee98171"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.574201 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa542e94-2400-4e6d-9576-687a18529d96","Type":"ContainerStarted","Data":"4d6e6d8ba2cc82dbbc7eee35e7557f402723ab86618cfee904b5554ebd319174"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.577253 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fff6c4f96-4xg9k" event={"ID":"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7","Type":"ContainerStarted","Data":"ef049f0b8553aafae02db803e0d0eb40cbe68ef302223129804d7375983c787f"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.577305 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fff6c4f96-4xg9k" event={"ID":"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7","Type":"ContainerStarted","Data":"6b37bd809e6f2aa776a372012e3a63a7f5f0c59b3e363caa1146b2bc9ead8576"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.582374 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85c5d7444f-42m7z" event={"ID":"1968bbde-0a5e-48e1-b234-6b59addb2bd8","Type":"ContainerDied","Data":"c07121a1780dc31e03c4fbd605f473a9d34b6a1f01c96d9499fd41510bb4e64e"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.582402 5050 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85c5d7444f-42m7z" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.586929 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1115b898-f052-46bf-886a-489b12a35afb","Type":"ContainerStarted","Data":"f66477a23d53212bdfcf39cc11f12bcb276d1d6de687a3bc8390495f3701257f"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.589471 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2dd4adbc-b40c-4d55-8f48-b98cefb276dc","Type":"ContainerStarted","Data":"f2a9eb1af4a227292fff64c28eed9918c49728ac557d094ad4c938511ad87663"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.589620 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2dd4adbc-b40c-4d55-8f48-b98cefb276dc","Type":"ContainerStarted","Data":"07631bcb2fe0edd12e08e04609d116b0d5c4fad0d4d45e1058ecd747c1fc3905"} Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.638621 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.749194662 podStartE2EDuration="34.636846655s" podCreationTimestamp="2026-01-31 06:12:59 +0000 UTC" firstStartedPulling="2026-01-31 06:13:01.881848281 +0000 UTC m=+3106.931009877" lastFinishedPulling="2026-01-31 06:13:31.769500284 +0000 UTC m=+3136.818661870" observedRunningTime="2026-01-31 06:13:33.608494458 +0000 UTC m=+3138.657656054" watchObservedRunningTime="2026-01-31 06:13:33.636846655 +0000 UTC m=+3138.686008241" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.641038 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.415132088 podStartE2EDuration="34.640834773s" podCreationTimestamp="2026-01-31 06:12:59 +0000 UTC" firstStartedPulling="2026-01-31 
06:13:01.636336881 +0000 UTC m=+3106.685498477" lastFinishedPulling="2026-01-31 06:13:31.862039566 +0000 UTC m=+3136.911201162" observedRunningTime="2026-01-31 06:13:33.639041315 +0000 UTC m=+3138.688202911" watchObservedRunningTime="2026-01-31 06:13:33.640834773 +0000 UTC m=+3138.689996389" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.676519 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.676586 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.700968 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-869cd6f4d9-sfpnr"] Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.733024 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-869cd6f4d9-sfpnr"] Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.751834 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-fff6c4f96-4xg9k" podStartSLOduration=3.387384496 podStartE2EDuration="31.751812355s" podCreationTimestamp="2026-01-31 06:13:02 +0000 UTC" firstStartedPulling="2026-01-31 06:13:03.890585936 +0000 UTC m=+3108.939747532" lastFinishedPulling="2026-01-31 06:13:32.255013795 +0000 UTC m=+3137.304175391" observedRunningTime="2026-01-31 06:13:33.717030944 +0000 UTC m=+3138.766192550" watchObservedRunningTime="2026-01-31 06:13:33.751812355 +0000 UTC m=+3138.800973951" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.774920 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6e1af6-67b5-4266-857e-9f2031143f91" path="/var/lib/kubelet/pods/aa6e1af6-67b5-4266-857e-9f2031143f91/volumes" Jan 31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.847735 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85c5d7444f-42m7z"] Jan 
31 06:13:33 crc kubenswrapper[5050]: I0131 06:13:33.857647 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85c5d7444f-42m7z"] Jan 31 06:13:34 crc kubenswrapper[5050]: I0131 06:13:34.604573 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2dd4adbc-b40c-4d55-8f48-b98cefb276dc","Type":"ContainerStarted","Data":"38939758ed1186cd928e211963020f782f8c3ee395a93c099bc6a8bc3d175fbc"} Jan 31 06:13:34 crc kubenswrapper[5050]: I0131 06:13:34.607783 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa542e94-2400-4e6d-9576-687a18529d96","Type":"ContainerStarted","Data":"b38b061320f2343b42b306404b35a8cfb0625c9e7fd999126f54b45fed091874"} Jan 31 06:13:34 crc kubenswrapper[5050]: I0131 06:13:34.614003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b8468d8-lbt9b" event={"ID":"5ab353c6-0ce1-463c-b17c-2346de6787db","Type":"ContainerStarted","Data":"bde77f0b7e6cc7a8e9b9947a1d712fc93684ceb43add9730d82b1a319d63f2e0"} Jan 31 06:13:34 crc kubenswrapper[5050]: I0131 06:13:34.632151 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.632130832 podStartE2EDuration="28.632130832s" podCreationTimestamp="2026-01-31 06:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:13:34.626054548 +0000 UTC m=+3139.675216144" watchObservedRunningTime="2026-01-31 06:13:34.632130832 +0000 UTC m=+3139.681292428" Jan 31 06:13:34 crc kubenswrapper[5050]: I0131 06:13:34.654035 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-86b8468d8-lbt9b" podStartSLOduration=-9223372005.20085 podStartE2EDuration="31.653925852s" podCreationTimestamp="2026-01-31 06:13:03 +0000 UTC" 
firstStartedPulling="2026-01-31 06:13:04.77996622 +0000 UTC m=+3109.829127816" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:13:34.651398984 +0000 UTC m=+3139.700560580" watchObservedRunningTime="2026-01-31 06:13:34.653925852 +0000 UTC m=+3139.703087448" Jan 31 06:13:35 crc kubenswrapper[5050]: I0131 06:13:35.160305 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:35 crc kubenswrapper[5050]: I0131 06:13:35.207012 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 31 06:13:35 crc kubenswrapper[5050]: I0131 06:13:35.665128 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.665106019 podStartE2EDuration="26.665106019s" podCreationTimestamp="2026-01-31 06:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:13:35.660053092 +0000 UTC m=+3140.709214708" watchObservedRunningTime="2026-01-31 06:13:35.665106019 +0000 UTC m=+3140.714267615" Jan 31 06:13:35 crc kubenswrapper[5050]: I0131 06:13:35.750492 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1968bbde-0a5e-48e1-b234-6b59addb2bd8" path="/var/lib/kubelet/pods/1968bbde-0a5e-48e1-b234-6b59addb2bd8/volumes" Jan 31 06:13:36 crc kubenswrapper[5050]: I0131 06:13:36.688036 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 06:13:36 crc kubenswrapper[5050]: I0131 06:13:36.689144 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 06:13:36 crc kubenswrapper[5050]: I0131 06:13:36.689183 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 31 06:13:36 crc kubenswrapper[5050]: I0131 06:13:36.689198 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 06:13:36 crc kubenswrapper[5050]: I0131 06:13:36.752976 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 06:13:36 crc kubenswrapper[5050]: I0131 06:13:36.753135 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 06:13:38 crc kubenswrapper[5050]: I0131 06:13:38.736653 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:13:38 crc kubenswrapper[5050]: E0131 06:13:38.737797 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.004334 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.005891 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.005907 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.005919 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 
06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.056393 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.056463 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.360113 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 31 06:13:40 crc kubenswrapper[5050]: I0131 06:13:40.457101 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 31 06:13:41 crc kubenswrapper[5050]: I0131 06:13:41.581281 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 06:13:43 crc kubenswrapper[5050]: I0131 06:13:43.293698 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:43 crc kubenswrapper[5050]: I0131 06:13:43.294227 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:13:43 crc kubenswrapper[5050]: I0131 06:13:43.295266 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fff6c4f96-4xg9k" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 31 06:13:43 crc kubenswrapper[5050]: I0131 06:13:43.639648 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 06:13:43 crc kubenswrapper[5050]: I0131 06:13:43.678484 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86b8468d8-lbt9b" 
podUID="5ab353c6-0ce1-463c-b17c-2346de6787db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused" Jan 31 06:13:46 crc kubenswrapper[5050]: I0131 06:13:46.023680 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:46 crc kubenswrapper[5050]: I0131 06:13:46.059835 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 06:13:49 crc kubenswrapper[5050]: I0131 06:13:49.736334 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:13:49 crc kubenswrapper[5050]: E0131 06:13:49.737621 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:13:53 crc kubenswrapper[5050]: I0131 06:13:53.294164 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fff6c4f96-4xg9k" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 31 06:13:53 crc kubenswrapper[5050]: I0131 06:13:53.675568 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86b8468d8-lbt9b" podUID="5ab353c6-0ce1-463c-b17c-2346de6787db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.0.245:8443: connect: connection refused" Jan 31 06:13:55 crc kubenswrapper[5050]: E0131 06:13:55.144704 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Jan 31 06:13:55 crc kubenswrapper[5050]: E0131 06:13:55.145181 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6flc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:
*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-db-sync-bcp7s_openstack(adc7d8ad-779c-4340-b51c-01a232f106b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 06:13:55 crc kubenswrapper[5050]: E0131 06:13:55.147120 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-bcp7s" podUID="adc7d8ad-779c-4340-b51c-01a232f106b8" Jan 31 06:13:55 crc kubenswrapper[5050]: E0131 06:13:55.876602 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-bcp7s" podUID="adc7d8ad-779c-4340-b51c-01a232f106b8" Jan 31 06:14:02 crc kubenswrapper[5050]: I0131 06:14:02.739977 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:14:02 crc kubenswrapper[5050]: E0131 06:14:02.741006 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:14:05 crc 
kubenswrapper[5050]: I0131 06:14:05.844329 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmhcb"] Jan 31 06:14:05 crc kubenswrapper[5050]: I0131 06:14:05.847511 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:05 crc kubenswrapper[5050]: I0131 06:14:05.854587 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmhcb"] Jan 31 06:14:05 crc kubenswrapper[5050]: I0131 06:14:05.990698 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjgz\" (UniqueName: \"kubernetes.io/projected/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-kube-api-access-vsjgz\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:05 crc kubenswrapper[5050]: I0131 06:14:05.990896 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-utilities\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:05 crc kubenswrapper[5050]: I0131 06:14:05.990986 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-catalog-content\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.030784 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.053407 5050 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.093327 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-utilities\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.093439 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-catalog-content\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.093501 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsjgz\" (UniqueName: \"kubernetes.io/projected/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-kube-api-access-vsjgz\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.093870 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-utilities\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.094004 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-catalog-content\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " 
pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.115066 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsjgz\" (UniqueName: \"kubernetes.io/projected/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-kube-api-access-vsjgz\") pod \"redhat-operators-rmhcb\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.179502 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.665720 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmhcb"] Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.981130 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmhcb" event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerStarted","Data":"d422a932916c44326af8335c8ace934cf08635e5b1fbc5a5547f8a5de4af7969"} Jan 31 06:14:06 crc kubenswrapper[5050]: I0131 06:14:06.981480 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmhcb" event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerStarted","Data":"351760e96dd7fe62b978ffd22fdca528fe3e852f45da428d4a311121619093fe"} Jan 31 06:14:07 crc kubenswrapper[5050]: I0131 06:14:07.968471 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:14:08 crc kubenswrapper[5050]: I0131 06:14:08.001660 5050 generic.go:334] "Generic (PLEG): container finished" podID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerID="d422a932916c44326af8335c8ace934cf08635e5b1fbc5a5547f8a5de4af7969" exitCode=0 Jan 31 06:14:08 crc kubenswrapper[5050]: I0131 06:14:08.001716 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-rmhcb" event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerDied","Data":"d422a932916c44326af8335c8ace934cf08635e5b1fbc5a5547f8a5de4af7969"} Jan 31 06:14:08 crc kubenswrapper[5050]: I0131 06:14:08.272984 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-86b8468d8-lbt9b" Jan 31 06:14:08 crc kubenswrapper[5050]: I0131 06:14:08.345701 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-fff6c4f96-4xg9k"] Jan 31 06:14:08 crc kubenswrapper[5050]: I0131 06:14:08.345980 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fff6c4f96-4xg9k" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon-log" containerID="cri-o://6b37bd809e6f2aa776a372012e3a63a7f5f0c59b3e363caa1146b2bc9ead8576" gracePeriod=30 Jan 31 06:14:08 crc kubenswrapper[5050]: I0131 06:14:08.346066 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fff6c4f96-4xg9k" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" containerID="cri-o://ef049f0b8553aafae02db803e0d0eb40cbe68ef302223129804d7375983c787f" gracePeriod=30 Jan 31 06:14:12 crc kubenswrapper[5050]: I0131 06:14:12.042564 5050 generic.go:334] "Generic (PLEG): container finished" podID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerID="ef049f0b8553aafae02db803e0d0eb40cbe68ef302223129804d7375983c787f" exitCode=0 Jan 31 06:14:12 crc kubenswrapper[5050]: I0131 06:14:12.043290 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fff6c4f96-4xg9k" event={"ID":"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7","Type":"ContainerDied","Data":"ef049f0b8553aafae02db803e0d0eb40cbe68ef302223129804d7375983c787f"} Jan 31 06:14:13 crc kubenswrapper[5050]: I0131 06:14:13.294841 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fff6c4f96-4xg9k" 
podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 31 06:14:17 crc kubenswrapper[5050]: I0131 06:14:17.736423 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:14:17 crc kubenswrapper[5050]: E0131 06:14:17.737505 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:14:21 crc kubenswrapper[5050]: I0131 06:14:21.141107 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmhcb" event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerStarted","Data":"0ee6eb3a7828b82441aceb7f09b2c1af87c411aab7f86e726ae6f4005134aa84"} Jan 31 06:14:22 crc kubenswrapper[5050]: I0131 06:14:22.157401 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-bcp7s" event={"ID":"adc7d8ad-779c-4340-b51c-01a232f106b8","Type":"ContainerStarted","Data":"739610c0af77343b6d04ee89e1c7717b5105ab129c905b9680d604973762e52b"} Jan 31 06:14:22 crc kubenswrapper[5050]: I0131 06:14:22.161919 5050 generic.go:334] "Generic (PLEG): container finished" podID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerID="0ee6eb3a7828b82441aceb7f09b2c1af87c411aab7f86e726ae6f4005134aa84" exitCode=0 Jan 31 06:14:22 crc kubenswrapper[5050]: I0131 06:14:22.161988 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmhcb" 
event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerDied","Data":"0ee6eb3a7828b82441aceb7f09b2c1af87c411aab7f86e726ae6f4005134aa84"} Jan 31 06:14:23 crc kubenswrapper[5050]: I0131 06:14:23.294472 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fff6c4f96-4xg9k" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 31 06:14:24 crc kubenswrapper[5050]: I0131 06:14:24.201299 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-bcp7s" podStartSLOduration=10.653108399 podStartE2EDuration="58.201277577s" podCreationTimestamp="2026-01-31 06:13:26 +0000 UTC" firstStartedPulling="2026-01-31 06:13:32.618433363 +0000 UTC m=+3137.667594959" lastFinishedPulling="2026-01-31 06:14:20.166602531 +0000 UTC m=+3185.215764137" observedRunningTime="2026-01-31 06:14:24.196314344 +0000 UTC m=+3189.245476010" watchObservedRunningTime="2026-01-31 06:14:24.201277577 +0000 UTC m=+3189.250439183" Jan 31 06:14:28 crc kubenswrapper[5050]: I0131 06:14:28.228509 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmhcb" event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerStarted","Data":"b0128be30ff648b4dc89af75d4c03efea376a06912b7fb4e91159608db854d5b"} Jan 31 06:14:28 crc kubenswrapper[5050]: I0131 06:14:28.256172 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmhcb" podStartSLOduration=3.601514865 podStartE2EDuration="23.256154041s" podCreationTimestamp="2026-01-31 06:14:05 +0000 UTC" firstStartedPulling="2026-01-31 06:14:08.003392619 +0000 UTC m=+3173.052554215" lastFinishedPulling="2026-01-31 06:14:27.658031795 +0000 UTC m=+3192.707193391" observedRunningTime="2026-01-31 06:14:28.247603759 
+0000 UTC m=+3193.296765365" watchObservedRunningTime="2026-01-31 06:14:28.256154041 +0000 UTC m=+3193.305315637" Jan 31 06:14:28 crc kubenswrapper[5050]: I0131 06:14:28.737992 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:14:28 crc kubenswrapper[5050]: E0131 06:14:28.738247 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:14:33 crc kubenswrapper[5050]: I0131 06:14:33.294478 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fff6c4f96-4xg9k" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 31 06:14:33 crc kubenswrapper[5050]: I0131 06:14:33.295089 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:14:36 crc kubenswrapper[5050]: I0131 06:14:36.180766 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:36 crc kubenswrapper[5050]: I0131 06:14:36.181577 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:36 crc kubenswrapper[5050]: I0131 06:14:36.244630 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:36 crc kubenswrapper[5050]: I0131 06:14:36.364844 5050 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:37 crc kubenswrapper[5050]: I0131 06:14:37.042458 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmhcb"] Jan 31 06:14:38 crc kubenswrapper[5050]: I0131 06:14:38.325074 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rmhcb" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="registry-server" containerID="cri-o://b0128be30ff648b4dc89af75d4c03efea376a06912b7fb4e91159608db854d5b" gracePeriod=2 Jan 31 06:14:39 crc kubenswrapper[5050]: I0131 06:14:39.278388 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.156:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:14:41 crc kubenswrapper[5050]: I0131 06:14:41.203254 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="1115b898-f052-46bf-886a-489b12a35afb" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.236:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:14:41 crc kubenswrapper[5050]: I0131 06:14:41.249162 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="4914b8b7-fa26-4e58-85e1-c072305954cf" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.237:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:14:41 crc kubenswrapper[5050]: I0131 06:14:41.358374 5050 generic.go:334] "Generic (PLEG): container finished" podID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" 
containerID="6b37bd809e6f2aa776a372012e3a63a7f5f0c59b3e363caa1146b2bc9ead8576" exitCode=137 Jan 31 06:14:41 crc kubenswrapper[5050]: I0131 06:14:41.358435 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fff6c4f96-4xg9k" event={"ID":"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7","Type":"ContainerDied","Data":"6b37bd809e6f2aa776a372012e3a63a7f5f0c59b3e363caa1146b2bc9ead8576"} Jan 31 06:14:42 crc kubenswrapper[5050]: I0131 06:14:42.737001 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:14:42 crc kubenswrapper[5050]: E0131 06:14:42.737825 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:14:43 crc kubenswrapper[5050]: I0131 06:14:43.294455 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fff6c4f96-4xg9k" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 31 06:14:43 crc kubenswrapper[5050]: I0131 06:14:43.382173 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmhcb_df6fe3ea-2b3d-426d-b292-f24b6e9f3f05/registry-server/0.log" Jan 31 06:14:43 crc kubenswrapper[5050]: I0131 06:14:43.383014 5050 generic.go:334] "Generic (PLEG): container finished" podID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerID="b0128be30ff648b4dc89af75d4c03efea376a06912b7fb4e91159608db854d5b" exitCode=137 Jan 31 06:14:43 crc kubenswrapper[5050]: I0131 
06:14:43.383061 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmhcb" event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerDied","Data":"b0128be30ff648b4dc89af75d4c03efea376a06912b7fb4e91159608db854d5b"} Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.254307 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.321227 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.156:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.385546 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-config-data\") pod \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.385633 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-secret-key\") pod \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.385680 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-combined-ca-bundle\") pod \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.385737 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-tls-certs\") pod \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.385839 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-scripts\") pod \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.385893 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-logs\") pod \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.386052 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjw5\" (UniqueName: \"kubernetes.io/projected/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-kube-api-access-wrjw5\") pod \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\" (UID: \"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7\") " Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.387216 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-logs" (OuterVolumeSpecName: "logs") pod "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" (UID: "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.388623 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-logs\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.392458 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" (UID: "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.393599 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-kube-api-access-wrjw5" (OuterVolumeSpecName: "kube-api-access-wrjw5") pod "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" (UID: "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7"). InnerVolumeSpecName "kube-api-access-wrjw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.397614 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fff6c4f96-4xg9k" event={"ID":"bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7","Type":"ContainerDied","Data":"a54c2ee22b0be72ae60ddb6e6f38f76fb1a6abf3c1da84ab973249df9d132da9"} Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.397675 5050 scope.go:117] "RemoveContainer" containerID="ef049f0b8553aafae02db803e0d0eb40cbe68ef302223129804d7375983c787f" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.397825 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-fff6c4f96-4xg9k" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.421644 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-scripts" (OuterVolumeSpecName: "scripts") pod "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" (UID: "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.431241 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-config-data" (OuterVolumeSpecName: "config-data") pod "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" (UID: "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.432522 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" (UID: "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.456040 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" (UID: "bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.490315 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.490353 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrjw5\" (UniqueName: \"kubernetes.io/projected/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-kube-api-access-wrjw5\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.490368 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.490379 5050 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.490391 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.490403 5050 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.707426 5050 scope.go:117] "RemoveContainer" containerID="6b37bd809e6f2aa776a372012e3a63a7f5f0c59b3e363caa1146b2bc9ead8576" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.733100 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-fff6c4f96-4xg9k"] Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.739535 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-fff6c4f96-4xg9k"] Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.992609 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmhcb_df6fe3ea-2b3d-426d-b292-f24b6e9f3f05/registry-server/0.log" Jan 31 06:14:44 crc kubenswrapper[5050]: I0131 06:14:44.993481 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.101647 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-catalog-content\") pod \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.102094 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-utilities\") pod \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.102127 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsjgz\" (UniqueName: \"kubernetes.io/projected/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-kube-api-access-vsjgz\") pod \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\" (UID: \"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05\") " Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.103736 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-utilities" (OuterVolumeSpecName: "utilities") pod "df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" (UID: 
"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.106105 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-kube-api-access-vsjgz" (OuterVolumeSpecName: "kube-api-access-vsjgz") pod "df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" (UID: "df6fe3ea-2b3d-426d-b292-f24b6e9f3f05"). InnerVolumeSpecName "kube-api-access-vsjgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.204515 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.204543 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsjgz\" (UniqueName: \"kubernetes.io/projected/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-kube-api-access-vsjgz\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.406916 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmhcb_df6fe3ea-2b3d-426d-b292-f24b6e9f3f05/registry-server/0.log" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.407508 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmhcb" event={"ID":"df6fe3ea-2b3d-426d-b292-f24b6e9f3f05","Type":"ContainerDied","Data":"351760e96dd7fe62b978ffd22fdca528fe3e852f45da428d4a311121619093fe"} Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.407544 5050 scope.go:117] "RemoveContainer" containerID="b0128be30ff648b4dc89af75d4c03efea376a06912b7fb4e91159608db854d5b" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.407584 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmhcb" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.429031 5050 scope.go:117] "RemoveContainer" containerID="0ee6eb3a7828b82441aceb7f09b2c1af87c411aab7f86e726ae6f4005134aa84" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.452938 5050 scope.go:117] "RemoveContainer" containerID="d422a932916c44326af8335c8ace934cf08635e5b1fbc5a5547f8a5de4af7969" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.527873 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" (UID: "df6fe3ea-2b3d-426d-b292-f24b6e9f3f05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.614072 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.749383 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" path="/var/lib/kubelet/pods/bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7/volumes" Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.750037 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmhcb"] Jan 31 06:14:45 crc kubenswrapper[5050]: I0131 06:14:45.750060 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rmhcb"] Jan 31 06:14:47 crc kubenswrapper[5050]: I0131 06:14:47.756110 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" path="/var/lib/kubelet/pods/df6fe3ea-2b3d-426d-b292-f24b6e9f3f05/volumes" Jan 31 
06:14:57 crc kubenswrapper[5050]: I0131 06:14:57.736863 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:14:57 crc kubenswrapper[5050]: E0131 06:14:57.738108 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.167492 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb"] Jan 31 06:15:00 crc kubenswrapper[5050]: E0131 06:15:00.168448 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon-log" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.168471 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon-log" Jan 31 06:15:00 crc kubenswrapper[5050]: E0131 06:15:00.168504 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="registry-server" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.168520 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="registry-server" Jan 31 06:15:00 crc kubenswrapper[5050]: E0131 06:15:00.168547 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.168563 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" 
containerName="horizon" Jan 31 06:15:00 crc kubenswrapper[5050]: E0131 06:15:00.168597 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="extract-content" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.168610 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="extract-content" Jan 31 06:15:00 crc kubenswrapper[5050]: E0131 06:15:00.168644 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="extract-utilities" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.168657 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="extract-utilities" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.169010 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.169048 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0f4bc0-6a5c-4b67-9e8f-95bc2caa19a7" containerName="horizon-log" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.169078 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6fe3ea-2b3d-426d-b292-f24b6e9f3f05" containerName="registry-server" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.170164 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.175577 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.175656 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.198763 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb"] Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.268414 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktwlt\" (UniqueName: \"kubernetes.io/projected/f3c89234-930e-4c7a-821c-edd74c21fee0-kube-api-access-ktwlt\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.268487 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c89234-930e-4c7a-821c-edd74c21fee0-config-volume\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.268732 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c89234-930e-4c7a-821c-edd74c21fee0-secret-volume\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.371221 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktwlt\" (UniqueName: \"kubernetes.io/projected/f3c89234-930e-4c7a-821c-edd74c21fee0-kube-api-access-ktwlt\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.371301 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c89234-930e-4c7a-821c-edd74c21fee0-config-volume\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:00 crc kubenswrapper[5050]: I0131 06:15:00.371376 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c89234-930e-4c7a-821c-edd74c21fee0-secret-volume\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:01 crc kubenswrapper[5050]: I0131 06:15:01.024453 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c89234-930e-4c7a-821c-edd74c21fee0-config-volume\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:02 crc kubenswrapper[5050]: I0131 06:15:02.045118 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f3c89234-930e-4c7a-821c-edd74c21fee0-secret-volume\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:02 crc kubenswrapper[5050]: I0131 06:15:02.045231 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktwlt\" (UniqueName: \"kubernetes.io/projected/f3c89234-930e-4c7a-821c-edd74c21fee0-kube-api-access-ktwlt\") pod \"collect-profiles-29497335-rg4mb\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:02 crc kubenswrapper[5050]: I0131 06:15:02.297734 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:02 crc kubenswrapper[5050]: I0131 06:15:02.821331 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb"] Jan 31 06:15:03 crc kubenswrapper[5050]: I0131 06:15:03.608460 5050 generic.go:334] "Generic (PLEG): container finished" podID="f3c89234-930e-4c7a-821c-edd74c21fee0" containerID="ff6cdf4003d886b81b49b283de179d1663a7933241c8c581cada810b1f9c2dcb" exitCode=0 Jan 31 06:15:03 crc kubenswrapper[5050]: I0131 06:15:03.608507 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" event={"ID":"f3c89234-930e-4c7a-821c-edd74c21fee0","Type":"ContainerDied","Data":"ff6cdf4003d886b81b49b283de179d1663a7933241c8c581cada810b1f9c2dcb"} Jan 31 06:15:03 crc kubenswrapper[5050]: I0131 06:15:03.608863 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" 
event={"ID":"f3c89234-930e-4c7a-821c-edd74c21fee0","Type":"ContainerStarted","Data":"668f48c1f0333d3eecd6445ff92f2d5e09c28d7cc4accdf31b48f67f1afa8e3f"} Jan 31 06:15:04 crc kubenswrapper[5050]: I0131 06:15:04.947819 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.067132 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c89234-930e-4c7a-821c-edd74c21fee0-secret-volume\") pod \"f3c89234-930e-4c7a-821c-edd74c21fee0\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.067232 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktwlt\" (UniqueName: \"kubernetes.io/projected/f3c89234-930e-4c7a-821c-edd74c21fee0-kube-api-access-ktwlt\") pod \"f3c89234-930e-4c7a-821c-edd74c21fee0\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.067319 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c89234-930e-4c7a-821c-edd74c21fee0-config-volume\") pod \"f3c89234-930e-4c7a-821c-edd74c21fee0\" (UID: \"f3c89234-930e-4c7a-821c-edd74c21fee0\") " Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.068140 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c89234-930e-4c7a-821c-edd74c21fee0-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3c89234-930e-4c7a-821c-edd74c21fee0" (UID: "f3c89234-930e-4c7a-821c-edd74c21fee0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.073785 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c89234-930e-4c7a-821c-edd74c21fee0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3c89234-930e-4c7a-821c-edd74c21fee0" (UID: "f3c89234-930e-4c7a-821c-edd74c21fee0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.073791 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c89234-930e-4c7a-821c-edd74c21fee0-kube-api-access-ktwlt" (OuterVolumeSpecName: "kube-api-access-ktwlt") pod "f3c89234-930e-4c7a-821c-edd74c21fee0" (UID: "f3c89234-930e-4c7a-821c-edd74c21fee0"). InnerVolumeSpecName "kube-api-access-ktwlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.170969 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktwlt\" (UniqueName: \"kubernetes.io/projected/f3c89234-930e-4c7a-821c-edd74c21fee0-kube-api-access-ktwlt\") on node \"crc\" DevicePath \"\"" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.171013 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c89234-930e-4c7a-821c-edd74c21fee0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.171022 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c89234-930e-4c7a-821c-edd74c21fee0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.633296 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" 
event={"ID":"f3c89234-930e-4c7a-821c-edd74c21fee0","Type":"ContainerDied","Data":"668f48c1f0333d3eecd6445ff92f2d5e09c28d7cc4accdf31b48f67f1afa8e3f"} Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.633354 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="668f48c1f0333d3eecd6445ff92f2d5e09c28d7cc4accdf31b48f67f1afa8e3f" Jan 31 06:15:05 crc kubenswrapper[5050]: I0131 06:15:05.633453 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497335-rg4mb" Jan 31 06:15:06 crc kubenswrapper[5050]: I0131 06:15:06.029772 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj"] Jan 31 06:15:06 crc kubenswrapper[5050]: I0131 06:15:06.037326 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-mbzhj"] Jan 31 06:15:07 crc kubenswrapper[5050]: I0131 06:15:07.747915 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547b8fdc-88c9-450a-9938-c17102596558" path="/var/lib/kubelet/pods/547b8fdc-88c9-450a-9938-c17102596558/volumes" Jan 31 06:15:09 crc kubenswrapper[5050]: I0131 06:15:09.736741 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:15:09 crc kubenswrapper[5050]: E0131 06:15:09.738039 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:15:23 crc kubenswrapper[5050]: I0131 06:15:23.737140 5050 scope.go:117] "RemoveContainer" 
containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:15:23 crc kubenswrapper[5050]: E0131 06:15:23.737971 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:15:31 crc kubenswrapper[5050]: I0131 06:15:31.902501 5050 scope.go:117] "RemoveContainer" containerID="536423c698e5681965e0298a31d7b6268d7e4eb34a4a91e84e8f54d9c00aff3d" Jan 31 06:15:36 crc kubenswrapper[5050]: I0131 06:15:36.739112 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:15:36 crc kubenswrapper[5050]: E0131 06:15:36.740560 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.251016 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c9xmn"] Jan 31 06:15:46 crc kubenswrapper[5050]: E0131 06:15:46.252311 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c89234-930e-4c7a-821c-edd74c21fee0" containerName="collect-profiles" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.252335 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c89234-930e-4c7a-821c-edd74c21fee0" containerName="collect-profiles" Jan 31 
06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.252675 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c89234-930e-4c7a-821c-edd74c21fee0" containerName="collect-profiles" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.254942 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.268389 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9xmn"] Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.428079 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c96568f-0200-4700-99cc-9c386d4fd176-utilities\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.428200 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdnc\" (UniqueName: \"kubernetes.io/projected/3c96568f-0200-4700-99cc-9c386d4fd176-kube-api-access-htdnc\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.428298 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c96568f-0200-4700-99cc-9c386d4fd176-catalog-content\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.530281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3c96568f-0200-4700-99cc-9c386d4fd176-catalog-content\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.530723 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c96568f-0200-4700-99cc-9c386d4fd176-utilities\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.530816 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdnc\" (UniqueName: \"kubernetes.io/projected/3c96568f-0200-4700-99cc-9c386d4fd176-kube-api-access-htdnc\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.532043 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c96568f-0200-4700-99cc-9c386d4fd176-catalog-content\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.532273 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c96568f-0200-4700-99cc-9c386d4fd176-utilities\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.551893 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdnc\" (UniqueName: 
\"kubernetes.io/projected/3c96568f-0200-4700-99cc-9c386d4fd176-kube-api-access-htdnc\") pod \"community-operators-c9xmn\" (UID: \"3c96568f-0200-4700-99cc-9c386d4fd176\") " pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:46 crc kubenswrapper[5050]: I0131 06:15:46.590577 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:15:47 crc kubenswrapper[5050]: I0131 06:15:47.168169 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9xmn"] Jan 31 06:15:48 crc kubenswrapper[5050]: I0131 06:15:48.020511 5050 generic.go:334] "Generic (PLEG): container finished" podID="3c96568f-0200-4700-99cc-9c386d4fd176" containerID="844535206da5676005133002388abdc9c413ddef0ebc08bff1d5c6609e024ff3" exitCode=0 Jan 31 06:15:48 crc kubenswrapper[5050]: I0131 06:15:48.020628 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9xmn" event={"ID":"3c96568f-0200-4700-99cc-9c386d4fd176","Type":"ContainerDied","Data":"844535206da5676005133002388abdc9c413ddef0ebc08bff1d5c6609e024ff3"} Jan 31 06:15:48 crc kubenswrapper[5050]: I0131 06:15:48.020922 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9xmn" event={"ID":"3c96568f-0200-4700-99cc-9c386d4fd176","Type":"ContainerStarted","Data":"2949813b98041467f6bf90f9aed7755ecd28dab12dcced3d0de9fa995e548caf"} Jan 31 06:15:48 crc kubenswrapper[5050]: I0131 06:15:48.736812 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:15:50 crc kubenswrapper[5050]: I0131 06:15:50.042716 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" 
event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"94806851aee0134f3d90df4319b62fedbb74408bf4a52f75abe44a79e6de8a38"} Jan 31 06:15:59 crc kubenswrapper[5050]: I0131 06:15:59.116051 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9xmn" event={"ID":"3c96568f-0200-4700-99cc-9c386d4fd176","Type":"ContainerStarted","Data":"0966ba3f6354dacb42faeca1485e606348e8360fb1b681db7564190a18e90967"} Jan 31 06:16:00 crc kubenswrapper[5050]: I0131 06:16:00.127416 5050 generic.go:334] "Generic (PLEG): container finished" podID="3c96568f-0200-4700-99cc-9c386d4fd176" containerID="0966ba3f6354dacb42faeca1485e606348e8360fb1b681db7564190a18e90967" exitCode=0 Jan 31 06:16:00 crc kubenswrapper[5050]: I0131 06:16:00.127723 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9xmn" event={"ID":"3c96568f-0200-4700-99cc-9c386d4fd176","Type":"ContainerDied","Data":"0966ba3f6354dacb42faeca1485e606348e8360fb1b681db7564190a18e90967"} Jan 31 06:16:07 crc kubenswrapper[5050]: I0131 06:16:07.199355 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9xmn" event={"ID":"3c96568f-0200-4700-99cc-9c386d4fd176","Type":"ContainerStarted","Data":"8c0b1fc85bd351109526d29aa7aeac01ce09a678c77201773d458375396694e1"} Jan 31 06:16:09 crc kubenswrapper[5050]: I0131 06:16:09.265831 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c9xmn" podStartSLOduration=5.796086446 podStartE2EDuration="23.26580974s" podCreationTimestamp="2026-01-31 06:15:46 +0000 UTC" firstStartedPulling="2026-01-31 06:15:48.024914565 +0000 UTC m=+3273.074076171" lastFinishedPulling="2026-01-31 06:16:05.494637869 +0000 UTC m=+3290.543799465" observedRunningTime="2026-01-31 06:16:09.259218301 +0000 UTC m=+3294.308379917" watchObservedRunningTime="2026-01-31 06:16:09.26580974 +0000 UTC 
m=+3294.314971336" Jan 31 06:16:16 crc kubenswrapper[5050]: I0131 06:16:16.591373 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:16:16 crc kubenswrapper[5050]: I0131 06:16:16.591988 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:16:17 crc kubenswrapper[5050]: I0131 06:16:17.662380 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c9xmn" podUID="3c96568f-0200-4700-99cc-9c386d4fd176" containerName="registry-server" probeResult="failure" output=< Jan 31 06:16:17 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:16:17 crc kubenswrapper[5050]: > Jan 31 06:16:27 crc kubenswrapper[5050]: I0131 06:16:27.639875 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c9xmn" podUID="3c96568f-0200-4700-99cc-9c386d4fd176" containerName="registry-server" probeResult="failure" output=< Jan 31 06:16:27 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:16:27 crc kubenswrapper[5050]: > Jan 31 06:16:37 crc kubenswrapper[5050]: I0131 06:16:37.635638 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c9xmn" podUID="3c96568f-0200-4700-99cc-9c386d4fd176" containerName="registry-server" probeResult="failure" output=< Jan 31 06:16:37 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:16:37 crc kubenswrapper[5050]: > Jan 31 06:16:47 crc kubenswrapper[5050]: I0131 06:16:47.664408 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c9xmn" podUID="3c96568f-0200-4700-99cc-9c386d4fd176" containerName="registry-server" probeResult="failure" output=< Jan 31 06:16:47 crc kubenswrapper[5050]: timeout: failed to 
connect service ":50051" within 1s Jan 31 06:16:47 crc kubenswrapper[5050]: > Jan 31 06:16:57 crc kubenswrapper[5050]: I0131 06:16:57.634626 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c9xmn" podUID="3c96568f-0200-4700-99cc-9c386d4fd176" containerName="registry-server" probeResult="failure" output=< Jan 31 06:16:57 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:16:57 crc kubenswrapper[5050]: > Jan 31 06:17:06 crc kubenswrapper[5050]: I0131 06:17:06.642870 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:17:06 crc kubenswrapper[5050]: I0131 06:17:06.688154 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c9xmn" Jan 31 06:17:06 crc kubenswrapper[5050]: I0131 06:17:06.771924 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9xmn"] Jan 31 06:17:06 crc kubenswrapper[5050]: I0131 06:17:06.884584 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4cw5k"] Jan 31 06:17:06 crc kubenswrapper[5050]: I0131 06:17:06.884840 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4cw5k" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="registry-server" containerID="cri-o://daf1c28b8c46587dce2faaecca9fe6e6ea59d89577788e7dea04e339b642318d" gracePeriod=2 Jan 31 06:17:07 crc kubenswrapper[5050]: I0131 06:17:07.781849 5050 generic.go:334] "Generic (PLEG): container finished" podID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerID="daf1c28b8c46587dce2faaecca9fe6e6ea59d89577788e7dea04e339b642318d" exitCode=0 Jan 31 06:17:07 crc kubenswrapper[5050]: I0131 06:17:07.781929 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4cw5k" event={"ID":"3a56f313-7ca2-4e38-a80b-6395af5eebde","Type":"ContainerDied","Data":"daf1c28b8c46587dce2faaecca9fe6e6ea59d89577788e7dea04e339b642318d"} Jan 31 06:17:07 crc kubenswrapper[5050]: I0131 06:17:07.992433 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.085156 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5n4c\" (UniqueName: \"kubernetes.io/projected/3a56f313-7ca2-4e38-a80b-6395af5eebde-kube-api-access-c5n4c\") pod \"3a56f313-7ca2-4e38-a80b-6395af5eebde\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.086184 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-utilities\") pod \"3a56f313-7ca2-4e38-a80b-6395af5eebde\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.086280 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-catalog-content\") pod \"3a56f313-7ca2-4e38-a80b-6395af5eebde\" (UID: \"3a56f313-7ca2-4e38-a80b-6395af5eebde\") " Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.086924 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-utilities" (OuterVolumeSpecName: "utilities") pod "3a56f313-7ca2-4e38-a80b-6395af5eebde" (UID: "3a56f313-7ca2-4e38-a80b-6395af5eebde"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.095224 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a56f313-7ca2-4e38-a80b-6395af5eebde-kube-api-access-c5n4c" (OuterVolumeSpecName: "kube-api-access-c5n4c") pod "3a56f313-7ca2-4e38-a80b-6395af5eebde" (UID: "3a56f313-7ca2-4e38-a80b-6395af5eebde"). InnerVolumeSpecName "kube-api-access-c5n4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.130476 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a56f313-7ca2-4e38-a80b-6395af5eebde" (UID: "3a56f313-7ca2-4e38-a80b-6395af5eebde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.188203 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5n4c\" (UniqueName: \"kubernetes.io/projected/3a56f313-7ca2-4e38-a80b-6395af5eebde-kube-api-access-c5n4c\") on node \"crc\" DevicePath \"\"" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.188236 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.188245 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a56f313-7ca2-4e38-a80b-6395af5eebde-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.792310 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cw5k" 
event={"ID":"3a56f313-7ca2-4e38-a80b-6395af5eebde","Type":"ContainerDied","Data":"396333ea8f1c2aa31514279544d89e385ed360edde6817d86ad51a9ea1694fc4"} Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.792376 5050 scope.go:117] "RemoveContainer" containerID="daf1c28b8c46587dce2faaecca9fe6e6ea59d89577788e7dea04e339b642318d" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.792407 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4cw5k" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.830132 5050 scope.go:117] "RemoveContainer" containerID="679de2a2668b3736898e4e2ef45a9e9f52b71fb47e92680d3150c7ba66410648" Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.842637 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4cw5k"] Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.850740 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4cw5k"] Jan 31 06:17:08 crc kubenswrapper[5050]: I0131 06:17:08.982864 5050 scope.go:117] "RemoveContainer" containerID="d3437986eb479e2cd19d0a2d9e2be39dfaefd934f635b9230400c906a149ee2b" Jan 31 06:17:09 crc kubenswrapper[5050]: I0131 06:17:09.747247 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" path="/var/lib/kubelet/pods/3a56f313-7ca2-4e38-a80b-6395af5eebde/volumes" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.699780 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kw94v"] Jan 31 06:17:18 crc kubenswrapper[5050]: E0131 06:17:18.700851 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="registry-server" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.700869 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="registry-server" Jan 31 06:17:18 crc kubenswrapper[5050]: E0131 06:17:18.700898 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="extract-content" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.700907 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="extract-content" Jan 31 06:17:18 crc kubenswrapper[5050]: E0131 06:17:18.700930 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="extract-utilities" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.700942 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="extract-utilities" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.701263 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a56f313-7ca2-4e38-a80b-6395af5eebde" containerName="registry-server" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.703178 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.710228 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kw94v"] Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.808570 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjszg\" (UniqueName: \"kubernetes.io/projected/ea339936-b25c-4903-b704-85f451b738c6-kube-api-access-bjszg\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.808967 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-utilities\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.809163 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-catalog-content\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.911234 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-catalog-content\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.911454 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bjszg\" (UniqueName: \"kubernetes.io/projected/ea339936-b25c-4903-b704-85f451b738c6-kube-api-access-bjszg\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.911693 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-utilities\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.911719 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-catalog-content\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.912240 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-utilities\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:18 crc kubenswrapper[5050]: I0131 06:17:18.940015 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjszg\" (UniqueName: \"kubernetes.io/projected/ea339936-b25c-4903-b704-85f451b738c6-kube-api-access-bjszg\") pod \"certified-operators-kw94v\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:19 crc kubenswrapper[5050]: I0131 06:17:19.020255 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:19 crc kubenswrapper[5050]: I0131 06:17:19.598792 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kw94v"] Jan 31 06:17:19 crc kubenswrapper[5050]: I0131 06:17:19.887748 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw94v" event={"ID":"ea339936-b25c-4903-b704-85f451b738c6","Type":"ContainerStarted","Data":"c557888e942db7725c69d1c00c627b90d629baa2c80f610f9ddfb854fc2cde75"} Jan 31 06:17:20 crc kubenswrapper[5050]: I0131 06:17:20.897746 5050 generic.go:334] "Generic (PLEG): container finished" podID="ea339936-b25c-4903-b704-85f451b738c6" containerID="2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307" exitCode=0 Jan 31 06:17:20 crc kubenswrapper[5050]: I0131 06:17:20.897791 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw94v" event={"ID":"ea339936-b25c-4903-b704-85f451b738c6","Type":"ContainerDied","Data":"2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307"} Jan 31 06:17:23 crc kubenswrapper[5050]: I0131 06:17:23.924741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw94v" event={"ID":"ea339936-b25c-4903-b704-85f451b738c6","Type":"ContainerStarted","Data":"7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71"} Jan 31 06:17:24 crc kubenswrapper[5050]: I0131 06:17:24.939661 5050 generic.go:334] "Generic (PLEG): container finished" podID="ea339936-b25c-4903-b704-85f451b738c6" containerID="7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71" exitCode=0 Jan 31 06:17:24 crc kubenswrapper[5050]: I0131 06:17:24.939854 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw94v" 
event={"ID":"ea339936-b25c-4903-b704-85f451b738c6","Type":"ContainerDied","Data":"7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71"} Jan 31 06:17:28 crc kubenswrapper[5050]: I0131 06:17:28.983962 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw94v" event={"ID":"ea339936-b25c-4903-b704-85f451b738c6","Type":"ContainerStarted","Data":"534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03"} Jan 31 06:17:29 crc kubenswrapper[5050]: I0131 06:17:29.008115 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kw94v" podStartSLOduration=3.930172509 podStartE2EDuration="11.008083991s" podCreationTimestamp="2026-01-31 06:17:18 +0000 UTC" firstStartedPulling="2026-01-31 06:17:20.899871182 +0000 UTC m=+3365.949032778" lastFinishedPulling="2026-01-31 06:17:27.977782664 +0000 UTC m=+3373.026944260" observedRunningTime="2026-01-31 06:17:29.000729325 +0000 UTC m=+3374.049890931" watchObservedRunningTime="2026-01-31 06:17:29.008083991 +0000 UTC m=+3374.057245587" Jan 31 06:17:29 crc kubenswrapper[5050]: I0131 06:17:29.021521 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:29 crc kubenswrapper[5050]: I0131 06:17:29.024144 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:30 crc kubenswrapper[5050]: I0131 06:17:30.070723 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kw94v" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="registry-server" probeResult="failure" output=< Jan 31 06:17:30 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:17:30 crc kubenswrapper[5050]: > Jan 31 06:17:39 crc kubenswrapper[5050]: I0131 06:17:39.081193 5050 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:39 crc kubenswrapper[5050]: I0131 06:17:39.141488 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:39 crc kubenswrapper[5050]: I0131 06:17:39.349582 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kw94v"] Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.091178 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kw94v" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="registry-server" containerID="cri-o://534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03" gracePeriod=2 Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.836768 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.912198 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjszg\" (UniqueName: \"kubernetes.io/projected/ea339936-b25c-4903-b704-85f451b738c6-kube-api-access-bjszg\") pod \"ea339936-b25c-4903-b704-85f451b738c6\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.912430 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-catalog-content\") pod \"ea339936-b25c-4903-b704-85f451b738c6\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.912745 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-utilities\") pod \"ea339936-b25c-4903-b704-85f451b738c6\" (UID: \"ea339936-b25c-4903-b704-85f451b738c6\") " Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.913997 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-utilities" (OuterVolumeSpecName: "utilities") pod "ea339936-b25c-4903-b704-85f451b738c6" (UID: "ea339936-b25c-4903-b704-85f451b738c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.920886 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea339936-b25c-4903-b704-85f451b738c6-kube-api-access-bjszg" (OuterVolumeSpecName: "kube-api-access-bjszg") pod "ea339936-b25c-4903-b704-85f451b738c6" (UID: "ea339936-b25c-4903-b704-85f451b738c6"). InnerVolumeSpecName "kube-api-access-bjszg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:17:41 crc kubenswrapper[5050]: I0131 06:17:41.970305 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea339936-b25c-4903-b704-85f451b738c6" (UID: "ea339936-b25c-4903-b704-85f451b738c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.015169 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.015222 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjszg\" (UniqueName: \"kubernetes.io/projected/ea339936-b25c-4903-b704-85f451b738c6-kube-api-access-bjszg\") on node \"crc\" DevicePath \"\"" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.015234 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea339936-b25c-4903-b704-85f451b738c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.104015 5050 generic.go:334] "Generic (PLEG): container finished" podID="ea339936-b25c-4903-b704-85f451b738c6" containerID="534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03" exitCode=0 Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.104075 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw94v" event={"ID":"ea339936-b25c-4903-b704-85f451b738c6","Type":"ContainerDied","Data":"534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03"} Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.104119 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw94v" event={"ID":"ea339936-b25c-4903-b704-85f451b738c6","Type":"ContainerDied","Data":"c557888e942db7725c69d1c00c627b90d629baa2c80f610f9ddfb854fc2cde75"} Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.104146 5050 scope.go:117] "RemoveContainer" containerID="534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 
06:17:42.104153 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kw94v" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.130028 5050 scope.go:117] "RemoveContainer" containerID="7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.146647 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kw94v"] Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.154752 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kw94v"] Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.174101 5050 scope.go:117] "RemoveContainer" containerID="2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.209525 5050 scope.go:117] "RemoveContainer" containerID="534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03" Jan 31 06:17:42 crc kubenswrapper[5050]: E0131 06:17:42.209867 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03\": container with ID starting with 534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03 not found: ID does not exist" containerID="534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.209905 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03"} err="failed to get container status \"534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03\": rpc error: code = NotFound desc = could not find container \"534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03\": container with ID starting with 
534a36bb33a2fd0a058727ce18c2602b3662dc29dd8722f156ea7c865289ea03 not found: ID does not exist" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.209928 5050 scope.go:117] "RemoveContainer" containerID="7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71" Jan 31 06:17:42 crc kubenswrapper[5050]: E0131 06:17:42.210205 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71\": container with ID starting with 7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71 not found: ID does not exist" containerID="7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.210236 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71"} err="failed to get container status \"7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71\": rpc error: code = NotFound desc = could not find container \"7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71\": container with ID starting with 7eaea5b395b5e04ed93517b71a9bb51d20c874e5d1008bad4cfb4c6117cbbe71 not found: ID does not exist" Jan 31 06:17:42 crc kubenswrapper[5050]: I0131 06:17:42.210265 5050 scope.go:117] "RemoveContainer" containerID="2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307" Jan 31 06:17:42 crc kubenswrapper[5050]: E0131 06:17:42.210602 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307\": container with ID starting with 2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307 not found: ID does not exist" containerID="2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307" Jan 31 06:17:42 crc 
kubenswrapper[5050]: I0131 06:17:42.210646 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307"} err="failed to get container status \"2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307\": rpc error: code = NotFound desc = could not find container \"2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307\": container with ID starting with 2e65d2b0bccd1b8e54eeff53f7aef98562fe951a8c39cc16a7ea1f06f675c307 not found: ID does not exist" Jan 31 06:17:43 crc kubenswrapper[5050]: I0131 06:17:43.749279 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea339936-b25c-4903-b704-85f451b738c6" path="/var/lib/kubelet/pods/ea339936-b25c-4903-b704-85f451b738c6/volumes" Jan 31 06:18:09 crc kubenswrapper[5050]: I0131 06:18:09.018513 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:18:09 crc kubenswrapper[5050]: I0131 06:18:09.020705 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:18:39 crc kubenswrapper[5050]: I0131 06:18:39.017988 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:18:39 crc kubenswrapper[5050]: I0131 06:18:39.018523 5050 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:18:57 crc kubenswrapper[5050]: I0131 06:18:57.756206 5050 generic.go:334] "Generic (PLEG): container finished" podID="adc7d8ad-779c-4340-b51c-01a232f106b8" containerID="739610c0af77343b6d04ee89e1c7717b5105ab129c905b9680d604973762e52b" exitCode=0 Jan 31 06:18:57 crc kubenswrapper[5050]: I0131 06:18:57.756292 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-bcp7s" event={"ID":"adc7d8ad-779c-4340-b51c-01a232f106b8","Type":"ContainerDied","Data":"739610c0af77343b6d04ee89e1c7717b5105ab129c905b9680d604973762e52b"} Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.216620 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-bcp7s" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.382291 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6flc\" (UniqueName: \"kubernetes.io/projected/adc7d8ad-779c-4340-b51c-01a232f106b8-kube-api-access-c6flc\") pod \"adc7d8ad-779c-4340-b51c-01a232f106b8\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.382406 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-config-data\") pod \"adc7d8ad-779c-4340-b51c-01a232f106b8\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.382481 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-combined-ca-bundle\") pod \"adc7d8ad-779c-4340-b51c-01a232f106b8\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.382589 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-job-config-data\") pod \"adc7d8ad-779c-4340-b51c-01a232f106b8\" (UID: \"adc7d8ad-779c-4340-b51c-01a232f106b8\") " Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.389888 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc7d8ad-779c-4340-b51c-01a232f106b8-kube-api-access-c6flc" (OuterVolumeSpecName: "kube-api-access-c6flc") pod "adc7d8ad-779c-4340-b51c-01a232f106b8" (UID: "adc7d8ad-779c-4340-b51c-01a232f106b8"). InnerVolumeSpecName "kube-api-access-c6flc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.391876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-config-data" (OuterVolumeSpecName: "config-data") pod "adc7d8ad-779c-4340-b51c-01a232f106b8" (UID: "adc7d8ad-779c-4340-b51c-01a232f106b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.395102 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "adc7d8ad-779c-4340-b51c-01a232f106b8" (UID: "adc7d8ad-779c-4340-b51c-01a232f106b8"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.411479 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adc7d8ad-779c-4340-b51c-01a232f106b8" (UID: "adc7d8ad-779c-4340-b51c-01a232f106b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.484698 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6flc\" (UniqueName: \"kubernetes.io/projected/adc7d8ad-779c-4340-b51c-01a232f106b8-kube-api-access-c6flc\") on node \"crc\" DevicePath \"\"" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.484746 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.484760 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.484774 5050 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/adc7d8ad-779c-4340-b51c-01a232f106b8-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.774104 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-bcp7s" event={"ID":"adc7d8ad-779c-4340-b51c-01a232f106b8","Type":"ContainerDied","Data":"d1be62e7521c1377d654969d82bd5d78d82bdc5015e229f63f24c42f0ee98171"} Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.774148 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1be62e7521c1377d654969d82bd5d78d82bdc5015e229f63f24c42f0ee98171" Jan 31 06:18:59 crc kubenswrapper[5050]: I0131 06:18:59.774212 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-bcp7s" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.131796 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:00 crc kubenswrapper[5050]: E0131 06:19:00.132210 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="registry-server" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.132226 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="registry-server" Jan 31 06:19:00 crc kubenswrapper[5050]: E0131 06:19:00.132258 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc7d8ad-779c-4340-b51c-01a232f106b8" containerName="manila-db-sync" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.132265 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc7d8ad-779c-4340-b51c-01a232f106b8" containerName="manila-db-sync" Jan 31 06:19:00 crc kubenswrapper[5050]: E0131 06:19:00.132280 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="extract-utilities" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.132286 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="extract-utilities" Jan 31 06:19:00 crc kubenswrapper[5050]: E0131 06:19:00.132299 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="extract-content" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.132304 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea339936-b25c-4903-b704-85f451b738c6" containerName="extract-content" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.132494 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea339936-b25c-4903-b704-85f451b738c6" 
containerName="registry-server" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.132551 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc7d8ad-779c-4340-b51c-01a232f106b8" containerName="manila-db-sync" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.133544 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.138390 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-mjmd7" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.138776 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.138780 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.145979 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.167719 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-twh2n"] Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.169731 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.217459 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.234028 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-twh2n"] Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.267523 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.269568 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.279044 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.279716 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.300742 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.300801 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljq4h\" (UniqueName: \"kubernetes.io/projected/2b497e20-3e4d-4df3-9194-f922711eb66c-kube-api-access-ljq4h\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.300890 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5njc\" (UniqueName: \"kubernetes.io/projected/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-kube-api-access-n5njc\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.300928 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.300972 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.301014 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.301036 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.301075 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-scripts\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.301099 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.301137 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.301235 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.301286 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-config\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.409902 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.409969 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljq4h\" (UniqueName: \"kubernetes.io/projected/2b497e20-3e4d-4df3-9194-f922711eb66c-kube-api-access-ljq4h\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410011 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410183 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410270 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-scripts\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410340 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5njc\" (UniqueName: \"kubernetes.io/projected/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-kube-api-access-n5njc\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410387 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410419 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410449 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410476 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410575 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410609 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410659 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-scripts\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410697 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410740 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " 
pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.410870 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.411041 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmtl\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-kube-api-access-hdmtl\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.411097 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-config\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.411134 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-ceph\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.412937 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc 
kubenswrapper[5050]: I0131 06:19:00.412962 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-config\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.413221 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.413358 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.414929 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b497e20-3e4d-4df3-9194-f922711eb66c-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.415038 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.418611 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-scripts\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.419242 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.422399 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.428520 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.431700 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljq4h\" (UniqueName: \"kubernetes.io/projected/2b497e20-3e4d-4df3-9194-f922711eb66c-kube-api-access-ljq4h\") pod \"dnsmasq-dns-76b5fdb995-twh2n\" (UID: \"2b497e20-3e4d-4df3-9194-f922711eb66c\") " pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.431963 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5njc\" (UniqueName: \"kubernetes.io/projected/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-kube-api-access-n5njc\") pod \"manila-scheduler-0\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") 
" pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.496045 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.512643 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.512806 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.512838 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-scripts\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.512878 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.512911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-var-lib-manila\") pod \"manila-share-share1-0\" (UID: 
\"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.512935 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.513174 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdmtl\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-kube-api-access-hdmtl\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.513218 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-ceph\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.513405 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.517554 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.518486 5050 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.526709 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.526701 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.527082 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-ceph\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.529617 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.530531 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-scripts\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.556722 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdmtl\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-kube-api-access-hdmtl\") pod \"manila-share-share1-0\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.589633 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.591718 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.600456 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.601131 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.604075 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.724443 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.724536 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data-custom\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.724669 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beeab7c7-5332-4b42-a463-adcaa1751ec3-logs\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.724808 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqh99\" (UniqueName: \"kubernetes.io/projected/beeab7c7-5332-4b42-a463-adcaa1751ec3-kube-api-access-wqh99\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.724911 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/beeab7c7-5332-4b42-a463-adcaa1751ec3-etc-machine-id\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 
06:19:00.725015 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.725406 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-scripts\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.827419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.827530 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-scripts\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.827657 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.827711 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data-custom\") pod 
\"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.827767 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beeab7c7-5332-4b42-a463-adcaa1751ec3-logs\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.827858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqh99\" (UniqueName: \"kubernetes.io/projected/beeab7c7-5332-4b42-a463-adcaa1751ec3-kube-api-access-wqh99\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.827921 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/beeab7c7-5332-4b42-a463-adcaa1751ec3-etc-machine-id\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.828036 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/beeab7c7-5332-4b42-a463-adcaa1751ec3-etc-machine-id\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.829017 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beeab7c7-5332-4b42-a463-adcaa1751ec3-logs\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.836723 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-scripts\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.837408 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data-custom\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.838065 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.839666 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.853385 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqh99\" (UniqueName: \"kubernetes.io/projected/beeab7c7-5332-4b42-a463-adcaa1751ec3-kube-api-access-wqh99\") pod \"manila-api-0\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " pod="openstack/manila-api-0" Jan 31 06:19:00 crc kubenswrapper[5050]: I0131 06:19:00.990276 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.119657 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.125240 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.271567 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-twh2n"] Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.521468 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.726721 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.805216 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8","Type":"ContainerStarted","Data":"70b3a30bf1334f9faf1ece0628c24cfe467da9217333fe8e1c72eedf2dae896e"} Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.807431 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"92840d80-f5bb-4a24-b9d9-95d876fe9bda","Type":"ContainerStarted","Data":"8e2385cf5d21f74d453bc8b54cb37ea74bfc1efdd5c3eb4c087a9df3690ac9da"} Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.820943 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b497e20-3e4d-4df3-9194-f922711eb66c" containerID="2f9e6c1cd2f6f33d6a23acd7a2c074a6eba320452ef26074ea1840279e57dcb9" exitCode=0 Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.821063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" 
event={"ID":"2b497e20-3e4d-4df3-9194-f922711eb66c","Type":"ContainerDied","Data":"2f9e6c1cd2f6f33d6a23acd7a2c074a6eba320452ef26074ea1840279e57dcb9"} Jan 31 06:19:01 crc kubenswrapper[5050]: I0131 06:19:01.821099 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" event={"ID":"2b497e20-3e4d-4df3-9194-f922711eb66c","Type":"ContainerStarted","Data":"8a123f71cbfebdf51e37519e7b8282d006a8087587731ad2401770c501cc50c1"} Jan 31 06:19:02 crc kubenswrapper[5050]: W0131 06:19:02.218587 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeeab7c7_5332_4b42_a463_adcaa1751ec3.slice/crio-e3c5f88b036c02d3e8ab3502d407fb59e2c8915afc28e06448cf13c52e61f348 WatchSource:0}: Error finding container e3c5f88b036c02d3e8ab3502d407fb59e2c8915afc28e06448cf13c52e61f348: Status 404 returned error can't find the container with id e3c5f88b036c02d3e8ab3502d407fb59e2c8915afc28e06448cf13c52e61f348 Jan 31 06:19:02 crc kubenswrapper[5050]: I0131 06:19:02.830107 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"beeab7c7-5332-4b42-a463-adcaa1751ec3","Type":"ContainerStarted","Data":"e3c5f88b036c02d3e8ab3502d407fb59e2c8915afc28e06448cf13c52e61f348"} Jan 31 06:19:02 crc kubenswrapper[5050]: I0131 06:19:02.833752 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" event={"ID":"2b497e20-3e4d-4df3-9194-f922711eb66c","Type":"ContainerStarted","Data":"ac08ae720c457116b1c9b2db9d5a544f03d47523bec8cce5cb55531a6982c6aa"} Jan 31 06:19:03 crc kubenswrapper[5050]: I0131 06:19:03.415028 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:03 crc kubenswrapper[5050]: I0131 06:19:03.848361 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8","Type":"ContainerStarted","Data":"dccb2b958a74d37a7b5ecf62d80392d3c08cd04a1db3d4b5e1e1c7d5caa78bd8"} Jan 31 06:19:03 crc kubenswrapper[5050]: I0131 06:19:03.856900 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"beeab7c7-5332-4b42-a463-adcaa1751ec3","Type":"ContainerStarted","Data":"cebf9e9eb0bdc2f92da63972b0ecba73368c596ea0829c94e7402226091ecfbd"} Jan 31 06:19:03 crc kubenswrapper[5050]: I0131 06:19:03.856940 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"beeab7c7-5332-4b42-a463-adcaa1751ec3","Type":"ContainerStarted","Data":"f2a622547ff521888dc2d69bf93cc426fbc550fbb019abb02a8a491369f9ed46"} Jan 31 06:19:03 crc kubenswrapper[5050]: I0131 06:19:03.856996 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:03 crc kubenswrapper[5050]: I0131 06:19:03.879802 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" podStartSLOduration=3.879779478 podStartE2EDuration="3.879779478s" podCreationTimestamp="2026-01-31 06:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:19:03.874051075 +0000 UTC m=+3468.923212681" watchObservedRunningTime="2026-01-31 06:19:03.879779478 +0000 UTC m=+3468.928941074" Jan 31 06:19:04 crc kubenswrapper[5050]: I0131 06:19:04.876342 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8","Type":"ContainerStarted","Data":"b8d93fe9e9cfc3ab45bb138ee05600764a5e134b168b12dd80984d946d915738"} Jan 31 06:19:04 crc kubenswrapper[5050]: I0131 06:19:04.876742 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 31 06:19:04 crc 
kubenswrapper[5050]: I0131 06:19:04.876736 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api-log" containerID="cri-o://f2a622547ff521888dc2d69bf93cc426fbc550fbb019abb02a8a491369f9ed46" gracePeriod=30 Jan 31 06:19:04 crc kubenswrapper[5050]: I0131 06:19:04.876808 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api" containerID="cri-o://cebf9e9eb0bdc2f92da63972b0ecba73368c596ea0829c94e7402226091ecfbd" gracePeriod=30 Jan 31 06:19:04 crc kubenswrapper[5050]: I0131 06:19:04.915116 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.991066333 podStartE2EDuration="4.91509073s" podCreationTimestamp="2026-01-31 06:19:00 +0000 UTC" firstStartedPulling="2026-01-31 06:19:01.125009987 +0000 UTC m=+3466.174171583" lastFinishedPulling="2026-01-31 06:19:03.049034384 +0000 UTC m=+3468.098195980" observedRunningTime="2026-01-31 06:19:04.901188538 +0000 UTC m=+3469.950350134" watchObservedRunningTime="2026-01-31 06:19:04.91509073 +0000 UTC m=+3469.964252326" Jan 31 06:19:04 crc kubenswrapper[5050]: I0131 06:19:04.935902 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.935883436 podStartE2EDuration="4.935883436s" podCreationTimestamp="2026-01-31 06:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:19:04.934829947 +0000 UTC m=+3469.983991553" watchObservedRunningTime="2026-01-31 06:19:04.935883436 +0000 UTC m=+3469.985045032" Jan 31 06:19:05 crc kubenswrapper[5050]: I0131 06:19:05.975508 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerID="cebf9e9eb0bdc2f92da63972b0ecba73368c596ea0829c94e7402226091ecfbd" exitCode=0 Jan 31 06:19:05 crc kubenswrapper[5050]: I0131 06:19:05.975549 5050 generic.go:334] "Generic (PLEG): container finished" podID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerID="f2a622547ff521888dc2d69bf93cc426fbc550fbb019abb02a8a491369f9ed46" exitCode=143 Jan 31 06:19:05 crc kubenswrapper[5050]: I0131 06:19:05.976793 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"beeab7c7-5332-4b42-a463-adcaa1751ec3","Type":"ContainerDied","Data":"cebf9e9eb0bdc2f92da63972b0ecba73368c596ea0829c94e7402226091ecfbd"} Jan 31 06:19:05 crc kubenswrapper[5050]: I0131 06:19:05.976836 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"beeab7c7-5332-4b42-a463-adcaa1751ec3","Type":"ContainerDied","Data":"f2a622547ff521888dc2d69bf93cc426fbc550fbb019abb02a8a491369f9ed46"} Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.577309 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.700314 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-scripts\") pod \"beeab7c7-5332-4b42-a463-adcaa1751ec3\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.700713 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data-custom\") pod \"beeab7c7-5332-4b42-a463-adcaa1751ec3\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.700778 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/beeab7c7-5332-4b42-a463-adcaa1751ec3-etc-machine-id\") pod \"beeab7c7-5332-4b42-a463-adcaa1751ec3\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.700805 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-combined-ca-bundle\") pod \"beeab7c7-5332-4b42-a463-adcaa1751ec3\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.700874 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beeab7c7-5332-4b42-a463-adcaa1751ec3-logs\") pod \"beeab7c7-5332-4b42-a463-adcaa1751ec3\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.700890 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqh99\" (UniqueName: 
\"kubernetes.io/projected/beeab7c7-5332-4b42-a463-adcaa1751ec3-kube-api-access-wqh99\") pod \"beeab7c7-5332-4b42-a463-adcaa1751ec3\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.700977 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data\") pod \"beeab7c7-5332-4b42-a463-adcaa1751ec3\" (UID: \"beeab7c7-5332-4b42-a463-adcaa1751ec3\") " Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.702928 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beeab7c7-5332-4b42-a463-adcaa1751ec3-logs" (OuterVolumeSpecName: "logs") pod "beeab7c7-5332-4b42-a463-adcaa1751ec3" (UID: "beeab7c7-5332-4b42-a463-adcaa1751ec3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.703439 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beeab7c7-5332-4b42-a463-adcaa1751ec3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "beeab7c7-5332-4b42-a463-adcaa1751ec3" (UID: "beeab7c7-5332-4b42-a463-adcaa1751ec3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.710091 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beeab7c7-5332-4b42-a463-adcaa1751ec3-kube-api-access-wqh99" (OuterVolumeSpecName: "kube-api-access-wqh99") pod "beeab7c7-5332-4b42-a463-adcaa1751ec3" (UID: "beeab7c7-5332-4b42-a463-adcaa1751ec3"). InnerVolumeSpecName "kube-api-access-wqh99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.729196 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "beeab7c7-5332-4b42-a463-adcaa1751ec3" (UID: "beeab7c7-5332-4b42-a463-adcaa1751ec3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.729821 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-scripts" (OuterVolumeSpecName: "scripts") pod "beeab7c7-5332-4b42-a463-adcaa1751ec3" (UID: "beeab7c7-5332-4b42-a463-adcaa1751ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.736891 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beeab7c7-5332-4b42-a463-adcaa1751ec3" (UID: "beeab7c7-5332-4b42-a463-adcaa1751ec3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.797093 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data" (OuterVolumeSpecName: "config-data") pod "beeab7c7-5332-4b42-a463-adcaa1751ec3" (UID: "beeab7c7-5332-4b42-a463-adcaa1751ec3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.803011 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.803049 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.803061 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/beeab7c7-5332-4b42-a463-adcaa1751ec3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.803072 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.803085 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beeab7c7-5332-4b42-a463-adcaa1751ec3-logs\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.803097 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqh99\" (UniqueName: \"kubernetes.io/projected/beeab7c7-5332-4b42-a463-adcaa1751ec3-kube-api-access-wqh99\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.803108 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beeab7c7-5332-4b42-a463-adcaa1751ec3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.991868 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"beeab7c7-5332-4b42-a463-adcaa1751ec3","Type":"ContainerDied","Data":"e3c5f88b036c02d3e8ab3502d407fb59e2c8915afc28e06448cf13c52e61f348"} Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.991922 5050 scope.go:117] "RemoveContainer" containerID="cebf9e9eb0bdc2f92da63972b0ecba73368c596ea0829c94e7402226091ecfbd" Jan 31 06:19:06 crc kubenswrapper[5050]: I0131 06:19:06.991988 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.051362 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.066818 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.078467 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:07 crc kubenswrapper[5050]: E0131 06:19:07.080940 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.081003 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api" Jan 31 06:19:07 crc kubenswrapper[5050]: E0131 06:19:07.081053 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api-log" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.081063 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api-log" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.081318 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api-log" Jan 31 
06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.081355 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" containerName="manila-api" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.082691 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.085855 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.086431 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.091297 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.099598 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211391 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2gl\" (UniqueName: \"kubernetes.io/projected/2b04e497-f938-4b7b-acbc-372819f1b1db-kube-api-access-jr2gl\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211448 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211519 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-public-tls-certs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211571 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-config-data-custom\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211618 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-config-data\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211639 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b04e497-f938-4b7b-acbc-372819f1b1db-etc-machine-id\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211817 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-scripts\") pod \"manila-api-0\" (UID: 
\"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.211946 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b04e497-f938-4b7b-acbc-372819f1b1db-logs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.314582 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2gl\" (UniqueName: \"kubernetes.io/projected/2b04e497-f938-4b7b-acbc-372819f1b1db-kube-api-access-jr2gl\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.314937 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.314992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-public-tls-certs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.315035 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-config-data-custom\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.315072 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-config-data\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.315100 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.315126 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b04e497-f938-4b7b-acbc-372819f1b1db-etc-machine-id\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.315168 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-scripts\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.315187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b04e497-f938-4b7b-acbc-372819f1b1db-logs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.315870 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b04e497-f938-4b7b-acbc-372819f1b1db-logs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc 
kubenswrapper[5050]: I0131 06:19:07.316583 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b04e497-f938-4b7b-acbc-372819f1b1db-etc-machine-id\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.319401 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.320128 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-config-data-custom\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.321381 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-public-tls-certs\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.322421 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-config-data\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.322485 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-internal-tls-certs\") pod 
\"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.331118 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b04e497-f938-4b7b-acbc-372819f1b1db-scripts\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.333659 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2gl\" (UniqueName: \"kubernetes.io/projected/2b04e497-f938-4b7b-acbc-372819f1b1db-kube-api-access-jr2gl\") pod \"manila-api-0\" (UID: \"2b04e497-f938-4b7b-acbc-372819f1b1db\") " pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.441763 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.755901 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beeab7c7-5332-4b42-a463-adcaa1751ec3" path="/var/lib/kubelet/pods/beeab7c7-5332-4b42-a463-adcaa1751ec3/volumes" Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.869120 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.869699 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-central-agent" containerID="cri-o://a5c4ed3b579d84e9770e2b8bd8b68b9756a8b388edf140f586cd133278b394da" gracePeriod=30 Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.869736 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="sg-core" 
containerID="cri-o://ca9d2e6875b4832aec78487e11aed648cef2a19f2ef7a51a981f28bbb5201fc5" gracePeriod=30 Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.869851 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-notification-agent" containerID="cri-o://81ac2632b609615b7cf8f0187811df728517e6c5f5ffff109bbad6d55ce84122" gracePeriod=30 Jan 31 06:19:07 crc kubenswrapper[5050]: I0131 06:19:07.869915 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="proxy-httpd" containerID="cri-o://0c5041d5bdc281e96a441e39025a784e965100cf5a1c16af9bb04202a2f680d1" gracePeriod=30 Jan 31 06:19:08 crc kubenswrapper[5050]: I0131 06:19:08.015689 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerID="ca9d2e6875b4832aec78487e11aed648cef2a19f2ef7a51a981f28bbb5201fc5" exitCode=2 Jan 31 06:19:08 crc kubenswrapper[5050]: I0131 06:19:08.015760 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerDied","Data":"ca9d2e6875b4832aec78487e11aed648cef2a19f2ef7a51a981f28bbb5201fc5"} Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.018476 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.018553 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.018790 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.019825 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94806851aee0134f3d90df4319b62fedbb74408bf4a52f75abe44a79e6de8a38"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.019902 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://94806851aee0134f3d90df4319b62fedbb74408bf4a52f75abe44a79e6de8a38" gracePeriod=600 Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.033821 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerID="0c5041d5bdc281e96a441e39025a784e965100cf5a1c16af9bb04202a2f680d1" exitCode=0 Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.033868 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerDied","Data":"0c5041d5bdc281e96a441e39025a784e965100cf5a1c16af9bb04202a2f680d1"} Jan 31 06:19:09 crc kubenswrapper[5050]: I0131 06:19:09.511590 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.187:3000/\": 
dial tcp 10.217.0.187:3000: connect: connection refused" Jan 31 06:19:10 crc kubenswrapper[5050]: I0131 06:19:10.046282 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerID="a5c4ed3b579d84e9770e2b8bd8b68b9756a8b388edf140f586cd133278b394da" exitCode=0 Jan 31 06:19:10 crc kubenswrapper[5050]: I0131 06:19:10.046355 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerDied","Data":"a5c4ed3b579d84e9770e2b8bd8b68b9756a8b388edf140f586cd133278b394da"} Jan 31 06:19:10 crc kubenswrapper[5050]: I0131 06:19:10.496661 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 06:19:10 crc kubenswrapper[5050]: I0131 06:19:10.520127 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-twh2n" Jan 31 06:19:10 crc kubenswrapper[5050]: I0131 06:19:10.579387 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kv8pz"] Jan 31 06:19:10 crc kubenswrapper[5050]: I0131 06:19:10.580349 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" podUID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerName="dnsmasq-dns" containerID="cri-o://06e3203d9a7111e19cc03e440f770c3624830f77dea863981243d5ba822d346c" gracePeriod=10 Jan 31 06:19:11 crc kubenswrapper[5050]: I0131 06:19:11.057888 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="94806851aee0134f3d90df4319b62fedbb74408bf4a52f75abe44a79e6de8a38" exitCode=0 Jan 31 06:19:11 crc kubenswrapper[5050]: I0131 06:19:11.057968 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" 
event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"94806851aee0134f3d90df4319b62fedbb74408bf4a52f75abe44a79e6de8a38"} Jan 31 06:19:11 crc kubenswrapper[5050]: I0131 06:19:11.060242 5050 generic.go:334] "Generic (PLEG): container finished" podID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerID="06e3203d9a7111e19cc03e440f770c3624830f77dea863981243d5ba822d346c" exitCode=0 Jan 31 06:19:11 crc kubenswrapper[5050]: I0131 06:19:11.060285 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" event={"ID":"eea77a53-6357-4243-b7bd-5b98e5f15146","Type":"ContainerDied","Data":"06e3203d9a7111e19cc03e440f770c3624830f77dea863981243d5ba822d346c"} Jan 31 06:19:11 crc kubenswrapper[5050]: I0131 06:19:11.292871 5050 scope.go:117] "RemoveContainer" containerID="f2a622547ff521888dc2d69bf93cc426fbc550fbb019abb02a8a491369f9ed46" Jan 31 06:19:11 crc kubenswrapper[5050]: I0131 06:19:11.429671 5050 scope.go:117] "RemoveContainer" containerID="83128b5a280dbb6737492e5acb2a5690502cfddf25b1d1629c506c8206ca4400" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:11.910742 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 31 06:19:14 crc kubenswrapper[5050]: W0131 06:19:11.918174 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b04e497_f938_4b7b_acbc_372819f1b1db.slice/crio-43a9177ceb8bb6808f4b668d1cdcbb11d7b9ede00f3553dbc77e57fabac4af9c WatchSource:0}: Error finding container 43a9177ceb8bb6808f4b668d1cdcbb11d7b9ede00f3553dbc77e57fabac4af9c: Status 404 returned error can't find the container with id 43a9177ceb8bb6808f4b668d1cdcbb11d7b9ede00f3553dbc77e57fabac4af9c Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.025684 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.091127 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" event={"ID":"eea77a53-6357-4243-b7bd-5b98e5f15146","Type":"ContainerDied","Data":"ea402798c33076e2d2b1dacf20bed67d766a3cce4c90254dfd401e373a3b46a4"} Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.091227 5050 scope.go:117] "RemoveContainer" containerID="06e3203d9a7111e19cc03e440f770c3624830f77dea863981243d5ba822d346c" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.091404 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kv8pz" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.100622 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a"} Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.116925 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-dns-svc\") pod \"eea77a53-6357-4243-b7bd-5b98e5f15146\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.116993 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-openstack-edpm-ipam\") pod \"eea77a53-6357-4243-b7bd-5b98e5f15146\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.117056 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-config\") pod \"eea77a53-6357-4243-b7bd-5b98e5f15146\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.117228 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-sb\") pod \"eea77a53-6357-4243-b7bd-5b98e5f15146\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.117262 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjcd6\" (UniqueName: \"kubernetes.io/projected/eea77a53-6357-4243-b7bd-5b98e5f15146-kube-api-access-wjcd6\") pod \"eea77a53-6357-4243-b7bd-5b98e5f15146\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.117314 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-nb\") pod \"eea77a53-6357-4243-b7bd-5b98e5f15146\" (UID: \"eea77a53-6357-4243-b7bd-5b98e5f15146\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.121602 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2b04e497-f938-4b7b-acbc-372819f1b1db","Type":"ContainerStarted","Data":"43a9177ceb8bb6808f4b668d1cdcbb11d7b9ede00f3553dbc77e57fabac4af9c"} Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.129105 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea77a53-6357-4243-b7bd-5b98e5f15146-kube-api-access-wjcd6" (OuterVolumeSpecName: "kube-api-access-wjcd6") pod "eea77a53-6357-4243-b7bd-5b98e5f15146" (UID: "eea77a53-6357-4243-b7bd-5b98e5f15146"). InnerVolumeSpecName "kube-api-access-wjcd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.186940 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eea77a53-6357-4243-b7bd-5b98e5f15146" (UID: "eea77a53-6357-4243-b7bd-5b98e5f15146"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.190734 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "eea77a53-6357-4243-b7bd-5b98e5f15146" (UID: "eea77a53-6357-4243-b7bd-5b98e5f15146"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.196423 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-config" (OuterVolumeSpecName: "config") pod "eea77a53-6357-4243-b7bd-5b98e5f15146" (UID: "eea77a53-6357-4243-b7bd-5b98e5f15146"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.217025 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eea77a53-6357-4243-b7bd-5b98e5f15146" (UID: "eea77a53-6357-4243-b7bd-5b98e5f15146"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.220685 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.220713 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjcd6\" (UniqueName: \"kubernetes.io/projected/eea77a53-6357-4243-b7bd-5b98e5f15146-kube-api-access-wjcd6\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.220726 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.220734 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.220745 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.223999 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eea77a53-6357-4243-b7bd-5b98e5f15146" (UID: "eea77a53-6357-4243-b7bd-5b98e5f15146"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.303762 5050 scope.go:117] "RemoveContainer" containerID="e331e53cc62a2a2522be9c2b05c0fd6c86695366830e1ae9c3a694a8f56d96d3" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.322699 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea77a53-6357-4243-b7bd-5b98e5f15146-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.448928 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kv8pz"] Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:12.461376 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kv8pz"] Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:13.166007 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2b04e497-f938-4b7b-acbc-372819f1b1db","Type":"ContainerStarted","Data":"2c88fec490687d88e59d97008aecc5f41ba6acaef4b9c0b2df79b5413396492a"} Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:13.169615 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerID="81ac2632b609615b7cf8f0187811df728517e6c5f5ffff109bbad6d55ce84122" exitCode=0 Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:13.169683 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerDied","Data":"81ac2632b609615b7cf8f0187811df728517e6c5f5ffff109bbad6d55ce84122"} Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:13.754854 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea77a53-6357-4243-b7bd-5b98e5f15146" path="/var/lib/kubelet/pods/eea77a53-6357-4243-b7bd-5b98e5f15146/volumes" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.200587 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2b04e497-f938-4b7b-acbc-372819f1b1db","Type":"ContainerStarted","Data":"6f0a7b2e72b8be141a6233779103663ebc842bf7261863a4519838ee818e81f8"} Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.201504 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.248397 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=7.248375853 podStartE2EDuration="7.248375853s" podCreationTimestamp="2026-01-31 06:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:19:14.242841305 +0000 UTC m=+3479.292002901" watchObservedRunningTime="2026-01-31 06:19:14.248375853 +0000 UTC m=+3479.297537449" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.824419 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.931326 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgwzh\" (UniqueName: \"kubernetes.io/projected/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-kube-api-access-tgwzh\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.931894 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-ceilometer-tls-certs\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.931941 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-combined-ca-bundle\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.932016 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-config-data\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.932083 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-scripts\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.932140 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-log-httpd\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.932203 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-sg-core-conf-yaml\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.932254 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-run-httpd\") pod \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\" (UID: \"6c2fd8d9-2a70-45bd-a0bc-02638aa83992\") " Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.933441 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: "6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.933502 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: "6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.946577 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-kube-api-access-tgwzh" (OuterVolumeSpecName: "kube-api-access-tgwzh") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: "6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "kube-api-access-tgwzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:19:14 crc kubenswrapper[5050]: I0131 06:19:14.946808 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-scripts" (OuterVolumeSpecName: "scripts") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: "6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.003795 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: "6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.030823 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: "6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.035142 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgwzh\" (UniqueName: \"kubernetes.io/projected/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-kube-api-access-tgwzh\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.035173 5050 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.035229 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.035246 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.035257 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.035268 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.051911 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: 
"6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.137344 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.153621 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-config-data" (OuterVolumeSpecName: "config-data") pod "6c2fd8d9-2a70-45bd-a0bc-02638aa83992" (UID: "6c2fd8d9-2a70-45bd-a0bc-02638aa83992"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.211611 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"92840d80-f5bb-4a24-b9d9-95d876fe9bda","Type":"ContainerStarted","Data":"d49e2537c5014bb89a539c94dce446ca75da3a0a96854f5a6309e87e344d821b"} Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.215181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c2fd8d9-2a70-45bd-a0bc-02638aa83992","Type":"ContainerDied","Data":"851a2da23530ba56925ce4bda8fb96f3cba07a5f46c678ccabc9f9f36c4eac29"} Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.215234 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.215244 5050 scope.go:117] "RemoveContainer" containerID="0c5041d5bdc281e96a441e39025a784e965100cf5a1c16af9bb04202a2f680d1" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.240815 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fd8d9-2a70-45bd-a0bc-02638aa83992-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.259283 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.262220 5050 scope.go:117] "RemoveContainer" containerID="ca9d2e6875b4832aec78487e11aed648cef2a19f2ef7a51a981f28bbb5201fc5" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.268632 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.288191 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:15 crc kubenswrapper[5050]: E0131 06:19:15.288647 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerName="init" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.288668 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerName="init" Jan 31 06:19:15 crc kubenswrapper[5050]: E0131 06:19:15.288682 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="sg-core" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.288688 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="sg-core" Jan 31 06:19:15 crc kubenswrapper[5050]: E0131 06:19:15.288717 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="proxy-httpd" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.288817 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="proxy-httpd" Jan 31 06:19:15 crc kubenswrapper[5050]: E0131 06:19:15.288843 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerName="dnsmasq-dns" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.288851 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerName="dnsmasq-dns" Jan 31 06:19:15 crc kubenswrapper[5050]: E0131 06:19:15.288864 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-notification-agent" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.288873 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-notification-agent" Jan 31 06:19:15 crc kubenswrapper[5050]: E0131 06:19:15.288886 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-central-agent" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.288893 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-central-agent" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.289186 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="sg-core" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.289220 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-notification-agent" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.289246 5050 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="proxy-httpd" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.289258 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea77a53-6357-4243-b7bd-5b98e5f15146" containerName="dnsmasq-dns" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.289269 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" containerName="ceilometer-central-agent" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.292993 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.297497 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.297621 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.297823 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.301476 5050 scope.go:117] "RemoveContainer" containerID="81ac2632b609615b7cf8f0187811df728517e6c5f5ffff109bbad6d55ce84122" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.303463 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.388307 5050 scope.go:117] "RemoveContainer" containerID="a5c4ed3b579d84e9770e2b8bd8b68b9756a8b388edf140f586cd133278b394da" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.446233 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-log-httpd\") pod \"ceilometer-0\" (UID: 
\"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.446305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-config-data\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.446330 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-run-httpd\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.446459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvxgz\" (UniqueName: \"kubernetes.io/projected/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-kube-api-access-jvxgz\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.446520 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.446544 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: 
I0131 06:19:15.446635 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.446675 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-scripts\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548211 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548286 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-scripts\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-log-httpd\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548372 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-config-data\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548391 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-run-httpd\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548483 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvxgz\" (UniqueName: \"kubernetes.io/projected/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-kube-api-access-jvxgz\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548544 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.548571 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.549670 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-run-httpd\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: 
I0131 06:19:15.550089 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-log-httpd\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.556351 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-scripts\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.559862 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.561033 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-config-data\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.562717 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.566707 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.613070 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvxgz\" (UniqueName: \"kubernetes.io/projected/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-kube-api-access-jvxgz\") pod \"ceilometer-0\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.660405 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.781589 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2fd8d9-2a70-45bd-a0bc-02638aa83992" path="/var/lib/kubelet/pods/6c2fd8d9-2a70-45bd-a0bc-02638aa83992/volumes" Jan 31 06:19:15 crc kubenswrapper[5050]: I0131 06:19:15.894276 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:16 crc kubenswrapper[5050]: I0131 06:19:16.203836 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:16 crc kubenswrapper[5050]: I0131 06:19:16.230520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerStarted","Data":"8c09d3f7b2cd9248c9b9a8567c274655da4cfd740a32139039e7bfdeedc12d4e"} Jan 31 06:19:16 crc kubenswrapper[5050]: I0131 06:19:16.240269 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"92840d80-f5bb-4a24-b9d9-95d876fe9bda","Type":"ContainerStarted","Data":"96b747adab2b593b7695f40a79f38844fda87bf374a2c592a96a0288cea2e39e"} Jan 31 06:19:16 crc kubenswrapper[5050]: I0131 06:19:16.263322 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.150116533 podStartE2EDuration="16.263306759s" 
podCreationTimestamp="2026-01-31 06:19:00 +0000 UTC" firstStartedPulling="2026-01-31 06:19:01.529654383 +0000 UTC m=+3466.578815979" lastFinishedPulling="2026-01-31 06:19:13.642844609 +0000 UTC m=+3478.692006205" observedRunningTime="2026-01-31 06:19:16.262379935 +0000 UTC m=+3481.311541531" watchObservedRunningTime="2026-01-31 06:19:16.263306759 +0000 UTC m=+3481.312468355" Jan 31 06:19:20 crc kubenswrapper[5050]: I0131 06:19:20.604745 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 06:19:22 crc kubenswrapper[5050]: I0131 06:19:22.282094 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 06:19:22 crc kubenswrapper[5050]: I0131 06:19:22.378833 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:22 crc kubenswrapper[5050]: I0131 06:19:22.379127 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="manila-scheduler" containerID="cri-o://dccb2b958a74d37a7b5ecf62d80392d3c08cd04a1db3d4b5e1e1c7d5caa78bd8" gracePeriod=30 Jan 31 06:19:22 crc kubenswrapper[5050]: I0131 06:19:22.379166 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="probe" containerID="cri-o://b8d93fe9e9cfc3ab45bb138ee05600764a5e134b168b12dd80984d946d915738" gracePeriod=30 Jan 31 06:19:24 crc kubenswrapper[5050]: I0131 06:19:24.384944 5050 generic.go:334] "Generic (PLEG): container finished" podID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerID="b8d93fe9e9cfc3ab45bb138ee05600764a5e134b168b12dd80984d946d915738" exitCode=0 Jan 31 06:19:24 crc kubenswrapper[5050]: I0131 06:19:24.385060 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8","Type":"ContainerDied","Data":"b8d93fe9e9cfc3ab45bb138ee05600764a5e134b168b12dd80984d946d915738"} Jan 31 06:19:26 crc kubenswrapper[5050]: I0131 06:19:26.406039 5050 generic.go:334] "Generic (PLEG): container finished" podID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerID="dccb2b958a74d37a7b5ecf62d80392d3c08cd04a1db3d4b5e1e1c7d5caa78bd8" exitCode=0 Jan 31 06:19:26 crc kubenswrapper[5050]: I0131 06:19:26.406094 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8","Type":"ContainerDied","Data":"dccb2b958a74d37a7b5ecf62d80392d3c08cd04a1db3d4b5e1e1c7d5caa78bd8"} Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.146900 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.254233 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5njc\" (UniqueName: \"kubernetes.io/projected/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-kube-api-access-n5njc\") pod \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.254361 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-combined-ca-bundle\") pod \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.254446 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data\") pod \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " Jan 31 06:19:27 crc kubenswrapper[5050]: 
I0131 06:19:27.254478 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data-custom\") pod \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.254506 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-scripts\") pod \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.254643 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-etc-machine-id\") pod \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\" (UID: \"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8\") " Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.255210 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" (UID: "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.277406 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" (UID: "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.280111 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-scripts" (OuterVolumeSpecName: "scripts") pod "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" (UID: "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.280313 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-kube-api-access-n5njc" (OuterVolumeSpecName: "kube-api-access-n5njc") pod "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" (UID: "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8"). InnerVolumeSpecName "kube-api-access-n5njc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.336259 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" (UID: "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.357657 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.357715 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.357730 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.357748 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.357766 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5njc\" (UniqueName: \"kubernetes.io/projected/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-kube-api-access-n5njc\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.395118 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data" (OuterVolumeSpecName: "config-data") pod "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" (UID: "cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.416042 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerStarted","Data":"ef922b1fb7fcc0e456ae553de2554cfa170187e29930ab07da4a0f2ce617ac69"} Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.418002 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8","Type":"ContainerDied","Data":"70b3a30bf1334f9faf1ece0628c24cfe467da9217333fe8e1c72eedf2dae896e"} Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.418044 5050 scope.go:117] "RemoveContainer" containerID="b8d93fe9e9cfc3ab45bb138ee05600764a5e134b168b12dd80984d946d915738" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.418196 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.460383 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.464694 5050 scope.go:117] "RemoveContainer" containerID="dccb2b958a74d37a7b5ecf62d80392d3c08cd04a1db3d4b5e1e1c7d5caa78bd8" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.480285 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.494008 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.505411 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:27 crc kubenswrapper[5050]: E0131 06:19:27.505931 5050 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="probe" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.505959 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="probe" Jan 31 06:19:27 crc kubenswrapper[5050]: E0131 06:19:27.505990 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="manila-scheduler" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.505999 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="manila-scheduler" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.506287 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="probe" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.506318 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" containerName="manila-scheduler" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.507698 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.514201 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.523880 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.563331 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.563382 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.563456 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.563493 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-scripts\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.563538 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-config-data\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.563712 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbxr\" (UniqueName: \"kubernetes.io/projected/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-kube-api-access-ggbxr\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.665491 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-scripts\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.665563 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-config-data\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.665594 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbxr\" (UniqueName: \"kubernetes.io/projected/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-kube-api-access-ggbxr\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.665655 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.665679 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.665733 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.666544 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.671126 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-scripts\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.671279 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " 
pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.672209 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-config-data\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.672249 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.684509 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbxr\" (UniqueName: \"kubernetes.io/projected/ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c-kube-api-access-ggbxr\") pod \"manila-scheduler-0\" (UID: \"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c\") " pod="openstack/manila-scheduler-0" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.760773 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8" path="/var/lib/kubelet/pods/cc4e63a5-d8ad-45a0-ae7a-c6df222a96e8/volumes" Jan 31 06:19:27 crc kubenswrapper[5050]: I0131 06:19:27.825699 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 31 06:19:28 crc kubenswrapper[5050]: I0131 06:19:28.256090 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 31 06:19:28 crc kubenswrapper[5050]: I0131 06:19:28.428212 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c","Type":"ContainerStarted","Data":"d818a7ca132dadefd541a4b669734313784e8a89bf3f2eed9535c8b44af76a9d"} Jan 31 06:19:29 crc kubenswrapper[5050]: I0131 06:19:29.440594 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c","Type":"ContainerStarted","Data":"d68949d4244ebcd4cdc776eb5e19e51b0be9506e1e798f24581725f6cbca7e29"} Jan 31 06:19:32 crc kubenswrapper[5050]: I0131 06:19:32.371615 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 31 06:19:32 crc kubenswrapper[5050]: I0131 06:19:32.491943 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 06:19:32 crc kubenswrapper[5050]: I0131 06:19:32.584587 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:33 crc kubenswrapper[5050]: I0131 06:19:33.490045 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerStarted","Data":"da631d87513d73c2b380be06e3f65c63588b7debbde7b747b1ae77d662552a30"} Jan 31 06:19:33 crc kubenswrapper[5050]: I0131 06:19:33.491805 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c","Type":"ContainerStarted","Data":"7f4dab418bcb2008d6d1f3618f6fbc5922e7f0cff38a4fb21c293440ba002709"} Jan 31 06:19:33 crc kubenswrapper[5050]: I0131 06:19:33.492043 5050 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="manila-share" containerID="cri-o://d49e2537c5014bb89a539c94dce446ca75da3a0a96854f5a6309e87e344d821b" gracePeriod=30 Jan 31 06:19:33 crc kubenswrapper[5050]: I0131 06:19:33.492086 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="probe" containerID="cri-o://96b747adab2b593b7695f40a79f38844fda87bf374a2c592a96a0288cea2e39e" gracePeriod=30 Jan 31 06:19:34 crc kubenswrapper[5050]: I0131 06:19:34.533046 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=7.5330250979999995 podStartE2EDuration="7.533025098s" podCreationTimestamp="2026-01-31 06:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:19:34.528190339 +0000 UTC m=+3499.577351935" watchObservedRunningTime="2026-01-31 06:19:34.533025098 +0000 UTC m=+3499.582186694" Jan 31 06:19:35 crc kubenswrapper[5050]: I0131 06:19:35.512686 5050 generic.go:334] "Generic (PLEG): container finished" podID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerID="96b747adab2b593b7695f40a79f38844fda87bf374a2c592a96a0288cea2e39e" exitCode=0 Jan 31 06:19:35 crc kubenswrapper[5050]: I0131 06:19:35.512750 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"92840d80-f5bb-4a24-b9d9-95d876fe9bda","Type":"ContainerDied","Data":"96b747adab2b593b7695f40a79f38844fda87bf374a2c592a96a0288cea2e39e"} Jan 31 06:19:36 crc kubenswrapper[5050]: I0131 06:19:36.525021 5050 generic.go:334] "Generic (PLEG): container finished" podID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" 
containerID="d49e2537c5014bb89a539c94dce446ca75da3a0a96854f5a6309e87e344d821b" exitCode=1 Jan 31 06:19:36 crc kubenswrapper[5050]: I0131 06:19:36.525104 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"92840d80-f5bb-4a24-b9d9-95d876fe9bda","Type":"ContainerDied","Data":"d49e2537c5014bb89a539c94dce446ca75da3a0a96854f5a6309e87e344d821b"} Jan 31 06:19:37 crc kubenswrapper[5050]: I0131 06:19:37.827084 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.049764 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.212635 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.213245 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-ceph\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.213314 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-var-lib-manila\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.213450 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-scripts\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.213534 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdmtl\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-kube-api-access-hdmtl\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.213574 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-combined-ca-bundle\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.213600 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-etc-machine-id\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.213651 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data-custom\") pod \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\" (UID: \"92840d80-f5bb-4a24-b9d9-95d876fe9bda\") " Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.214187 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.214265 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.218794 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-scripts" (OuterVolumeSpecName: "scripts") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.220579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-ceph" (OuterVolumeSpecName: "ceph") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.221034 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-kube-api-access-hdmtl" (OuterVolumeSpecName: "kube-api-access-hdmtl") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "kube-api-access-hdmtl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.228204 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.301092 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.316352 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.316437 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdmtl\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-kube-api-access-hdmtl\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.316453 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.316464 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.316476 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.316528 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92840d80-f5bb-4a24-b9d9-95d876fe9bda-ceph\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.316538 5050 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/92840d80-f5bb-4a24-b9d9-95d876fe9bda-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.346341 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data" (OuterVolumeSpecName: "config-data") pod "92840d80-f5bb-4a24-b9d9-95d876fe9bda" (UID: "92840d80-f5bb-4a24-b9d9-95d876fe9bda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.418616 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92840d80-f5bb-4a24-b9d9-95d876fe9bda-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.547210 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerStarted","Data":"7f9da5f87dbea3b7b15525a7af658bbc7a6d8b03511df3cb273a14dd0e3447fb"} Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.550737 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"92840d80-f5bb-4a24-b9d9-95d876fe9bda","Type":"ContainerDied","Data":"8e2385cf5d21f74d453bc8b54cb37ea74bfc1efdd5c3eb4c087a9df3690ac9da"} Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.550806 5050 scope.go:117] "RemoveContainer" containerID="96b747adab2b593b7695f40a79f38844fda87bf374a2c592a96a0288cea2e39e" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.551073 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.600321 5050 scope.go:117] "RemoveContainer" containerID="d49e2537c5014bb89a539c94dce446ca75da3a0a96854f5a6309e87e344d821b" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.601335 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.610826 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.639718 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:38 crc kubenswrapper[5050]: E0131 06:19:38.642734 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="manila-share" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.642768 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="manila-share" Jan 31 06:19:38 crc kubenswrapper[5050]: E0131 06:19:38.642814 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="probe" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.642822 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="probe" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.643075 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="manila-share" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.643096 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" containerName="probe" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.644099 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.648498 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.652150 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.827575 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-config-data\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.827647 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f9e0474f-b8df-4860-80ad-e852d72f4071-ceph\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.827677 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.827698 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-scripts\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.827896 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f9e0474f-b8df-4860-80ad-e852d72f4071-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.828095 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.828229 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f9e0474f-b8df-4860-80ad-e852d72f4071-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.828458 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2wzw\" (UniqueName: \"kubernetes.io/projected/f9e0474f-b8df-4860-80ad-e852d72f4071-kube-api-access-x2wzw\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.930094 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-config-data\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.930935 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f9e0474f-b8df-4860-80ad-e852d72f4071-ceph\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.930999 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.931022 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-scripts\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.931052 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f9e0474f-b8df-4860-80ad-e852d72f4071-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.931175 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.931224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/f9e0474f-b8df-4860-80ad-e852d72f4071-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.931494 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2wzw\" (UniqueName: \"kubernetes.io/projected/f9e0474f-b8df-4860-80ad-e852d72f4071-kube-api-access-x2wzw\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.931534 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f9e0474f-b8df-4860-80ad-e852d72f4071-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.931855 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f9e0474f-b8df-4860-80ad-e852d72f4071-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.934320 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f9e0474f-b8df-4860-80ad-e852d72f4071-ceph\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.934875 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-config-data\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " 
pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.935089 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.936416 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.937249 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e0474f-b8df-4860-80ad-e852d72f4071-scripts\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.952675 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2wzw\" (UniqueName: \"kubernetes.io/projected/f9e0474f-b8df-4860-80ad-e852d72f4071-kube-api-access-x2wzw\") pod \"manila-share-share1-0\" (UID: \"f9e0474f-b8df-4860-80ad-e852d72f4071\") " pod="openstack/manila-share-share1-0" Jan 31 06:19:38 crc kubenswrapper[5050]: I0131 06:19:38.975481 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 31 06:19:39 crc kubenswrapper[5050]: I0131 06:19:39.606786 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 31 06:19:39 crc kubenswrapper[5050]: I0131 06:19:39.754194 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92840d80-f5bb-4a24-b9d9-95d876fe9bda" path="/var/lib/kubelet/pods/92840d80-f5bb-4a24-b9d9-95d876fe9bda/volumes" Jan 31 06:19:40 crc kubenswrapper[5050]: I0131 06:19:40.598408 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f9e0474f-b8df-4860-80ad-e852d72f4071","Type":"ContainerStarted","Data":"da58ae2a3e4847148a723907b565a9f6566cae950f57b5cd3c88fe2b9f091e99"} Jan 31 06:19:43 crc kubenswrapper[5050]: I0131 06:19:43.630468 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f9e0474f-b8df-4860-80ad-e852d72f4071","Type":"ContainerStarted","Data":"2b1b6274e442013a940d7d90cfb018475e6d28b1a7c1cdfb7fe2f55bf4686b15"} Jan 31 06:19:44 crc kubenswrapper[5050]: I0131 06:19:44.641426 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f9e0474f-b8df-4860-80ad-e852d72f4071","Type":"ContainerStarted","Data":"3695bbabced01dd562a7bb4915ba61b535affc6ab8a371db42506dab07bbeef4"} Jan 31 06:19:45 crc kubenswrapper[5050]: I0131 06:19:45.678282 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=7.678255614 podStartE2EDuration="7.678255614s" podCreationTimestamp="2026-01-31 06:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:19:45.672754867 +0000 UTC m=+3510.721916473" watchObservedRunningTime="2026-01-31 06:19:45.678255614 +0000 UTC m=+3510.727417210" Jan 31 06:19:48 crc kubenswrapper[5050]: 
I0131 06:19:48.679888 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerStarted","Data":"0aa3fd98eecde0aca05a49abbb47af978522348126cc0e34303c43be595f5682"} Jan 31 06:19:48 crc kubenswrapper[5050]: I0131 06:19:48.976401 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 06:19:49 crc kubenswrapper[5050]: I0131 06:19:49.360230 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 06:19:49 crc kubenswrapper[5050]: I0131 06:19:49.689848 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="ceilometer-central-agent" containerID="cri-o://ef922b1fb7fcc0e456ae553de2554cfa170187e29930ab07da4a0f2ce617ac69" gracePeriod=30 Jan 31 06:19:49 crc kubenswrapper[5050]: I0131 06:19:49.690032 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 06:19:49 crc kubenswrapper[5050]: I0131 06:19:49.690175 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="ceilometer-notification-agent" containerID="cri-o://da631d87513d73c2b380be06e3f65c63588b7debbde7b747b1ae77d662552a30" gracePeriod=30 Jan 31 06:19:49 crc kubenswrapper[5050]: I0131 06:19:49.690209 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="sg-core" containerID="cri-o://7f9da5f87dbea3b7b15525a7af658bbc7a6d8b03511df3cb273a14dd0e3447fb" gracePeriod=30 Jan 31 06:19:49 crc kubenswrapper[5050]: I0131 06:19:49.690175 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="proxy-httpd" containerID="cri-o://0aa3fd98eecde0aca05a49abbb47af978522348126cc0e34303c43be595f5682" gracePeriod=30 Jan 31 06:19:49 crc kubenswrapper[5050]: I0131 06:19:49.723916 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.685474169 podStartE2EDuration="34.723898947s" podCreationTimestamp="2026-01-31 06:19:15 +0000 UTC" firstStartedPulling="2026-01-31 06:19:16.205235037 +0000 UTC m=+3481.254396653" lastFinishedPulling="2026-01-31 06:19:47.243659835 +0000 UTC m=+3512.292821431" observedRunningTime="2026-01-31 06:19:49.721081012 +0000 UTC m=+3514.770242608" watchObservedRunningTime="2026-01-31 06:19:49.723898947 +0000 UTC m=+3514.773060543" Jan 31 06:19:50 crc kubenswrapper[5050]: I0131 06:19:50.699133 5050 generic.go:334] "Generic (PLEG): container finished" podID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerID="0aa3fd98eecde0aca05a49abbb47af978522348126cc0e34303c43be595f5682" exitCode=0 Jan 31 06:19:50 crc kubenswrapper[5050]: I0131 06:19:50.699698 5050 generic.go:334] "Generic (PLEG): container finished" podID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerID="7f9da5f87dbea3b7b15525a7af658bbc7a6d8b03511df3cb273a14dd0e3447fb" exitCode=2 Jan 31 06:19:50 crc kubenswrapper[5050]: I0131 06:19:50.699712 5050 generic.go:334] "Generic (PLEG): container finished" podID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerID="ef922b1fb7fcc0e456ae553de2554cfa170187e29930ab07da4a0f2ce617ac69" exitCode=0 Jan 31 06:19:50 crc kubenswrapper[5050]: I0131 06:19:50.699169 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerDied","Data":"0aa3fd98eecde0aca05a49abbb47af978522348126cc0e34303c43be595f5682"} Jan 31 06:19:50 crc kubenswrapper[5050]: I0131 06:19:50.699744 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerDied","Data":"7f9da5f87dbea3b7b15525a7af658bbc7a6d8b03511df3cb273a14dd0e3447fb"} Jan 31 06:19:50 crc kubenswrapper[5050]: I0131 06:19:50.699756 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerDied","Data":"ef922b1fb7fcc0e456ae553de2554cfa170187e29930ab07da4a0f2ce617ac69"} Jan 31 06:19:53 crc kubenswrapper[5050]: I0131 06:19:53.740980 5050 generic.go:334] "Generic (PLEG): container finished" podID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerID="da631d87513d73c2b380be06e3f65c63588b7debbde7b747b1ae77d662552a30" exitCode=0 Jan 31 06:19:53 crc kubenswrapper[5050]: I0131 06:19:53.744966 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerDied","Data":"da631d87513d73c2b380be06e3f65c63588b7debbde7b747b1ae77d662552a30"} Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.440047 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.577628 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-config-data\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.577757 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-sg-core-conf-yaml\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.577861 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-scripts\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.577969 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-combined-ca-bundle\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.578032 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-log-httpd\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.578062 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-run-httpd\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.578115 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-ceilometer-tls-certs\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.578169 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvxgz\" (UniqueName: \"kubernetes.io/projected/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-kube-api-access-jvxgz\") pod \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\" (UID: \"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6\") " Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.579130 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.579540 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.584065 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-kube-api-access-jvxgz" (OuterVolumeSpecName: "kube-api-access-jvxgz") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "kube-api-access-jvxgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.584112 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-scripts" (OuterVolumeSpecName: "scripts") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.610924 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.631115 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.651500 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.673705 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-config-data" (OuterVolumeSpecName: "config-data") pod "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" (UID: "3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.682029 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.682095 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.682109 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.682139 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 
crc kubenswrapper[5050]: I0131 06:19:54.682148 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.682156 5050 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.682165 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvxgz\" (UniqueName: \"kubernetes.io/projected/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-kube-api-access-jvxgz\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.682175 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.753617 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6","Type":"ContainerDied","Data":"8c09d3f7b2cd9248c9b9a8567c274655da4cfd740a32139039e7bfdeedc12d4e"} Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.753699 5050 scope.go:117] "RemoveContainer" containerID="0aa3fd98eecde0aca05a49abbb47af978522348126cc0e34303c43be595f5682" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.753760 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.777830 5050 scope.go:117] "RemoveContainer" containerID="7f9da5f87dbea3b7b15525a7af658bbc7a6d8b03511df3cb273a14dd0e3447fb" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.799144 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.819552 5050 scope.go:117] "RemoveContainer" containerID="da631d87513d73c2b380be06e3f65c63588b7debbde7b747b1ae77d662552a30" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.831373 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.842568 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:54 crc kubenswrapper[5050]: E0131 06:19:54.843139 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="sg-core" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843163 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="sg-core" Jan 31 06:19:54 crc kubenswrapper[5050]: E0131 06:19:54.843189 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="proxy-httpd" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843199 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="proxy-httpd" Jan 31 06:19:54 crc kubenswrapper[5050]: E0131 06:19:54.843211 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="ceilometer-central-agent" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843220 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" 
containerName="ceilometer-central-agent" Jan 31 06:19:54 crc kubenswrapper[5050]: E0131 06:19:54.843233 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="ceilometer-notification-agent" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843241 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="ceilometer-notification-agent" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843525 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="proxy-httpd" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843557 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="ceilometer-notification-agent" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843576 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="ceilometer-central-agent" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.843591 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" containerName="sg-core" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.845784 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.846637 5050 scope.go:117] "RemoveContainer" containerID="ef922b1fb7fcc0e456ae553de2554cfa170187e29930ab07da4a0f2ce617ac69" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.850681 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.850801 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.850921 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.858382 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991373 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-log-httpd\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991495 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-scripts\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991545 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 
06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991574 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-config-data\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991599 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991632 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2wtq\" (UniqueName: \"kubernetes.io/projected/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-kube-api-access-p2wtq\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991678 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:54 crc kubenswrapper[5050]: I0131 06:19:54.991738 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-run-httpd\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093533 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-log-httpd\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093638 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-scripts\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093679 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093711 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-config-data\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093732 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093760 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2wtq\" (UniqueName: \"kubernetes.io/projected/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-kube-api-access-p2wtq\") pod \"ceilometer-0\" (UID: 
\"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093802 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.093851 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-run-httpd\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.094510 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-run-httpd\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.095154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-log-httpd\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.099934 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.101812 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-config-data\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.112714 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-scripts\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.113261 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.114116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.116903 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2wtq\" (UniqueName: \"kubernetes.io/projected/4ee96caa-81d3-4f74-80ae-2f8b57a94d96-kube-api-access-p2wtq\") pod \"ceilometer-0\" (UID: \"4ee96caa-81d3-4f74-80ae-2f8b57a94d96\") " pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.174223 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.631420 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 06:19:55 crc kubenswrapper[5050]: W0131 06:19:55.646967 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ee96caa_81d3_4f74_80ae_2f8b57a94d96.slice/crio-fe0227bc7f0857ebf552afdab44e96c36555a29385e25135624c99f26e0242df WatchSource:0}: Error finding container fe0227bc7f0857ebf552afdab44e96c36555a29385e25135624c99f26e0242df: Status 404 returned error can't find the container with id fe0227bc7f0857ebf552afdab44e96c36555a29385e25135624c99f26e0242df Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.749627 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6" path="/var/lib/kubelet/pods/3cadbc3f-d037-4ab3-8153-dfff8c2f5cf6/volumes" Jan 31 06:19:55 crc kubenswrapper[5050]: I0131 06:19:55.766105 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerStarted","Data":"fe0227bc7f0857ebf552afdab44e96c36555a29385e25135624c99f26e0242df"} Jan 31 06:19:59 crc kubenswrapper[5050]: I0131 06:19:59.804046 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerStarted","Data":"ce3b9fae133b3b33b64bf3541b9b3de0f2e08ee194ce24cf11726765325664fb"} Jan 31 06:20:00 crc kubenswrapper[5050]: I0131 06:20:00.455526 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 06:20:07 crc kubenswrapper[5050]: I0131 06:20:07.889442 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerStarted","Data":"2a730e0fce413527feaac0526381e128cebb92f759392b8907970ba8374b0235"} Jan 31 06:20:11 crc kubenswrapper[5050]: I0131 06:20:11.926437 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerStarted","Data":"474d38663374964cd3969237920fb670f616ff61a474b51479028bb2831961ae"} Jan 31 06:20:21 crc kubenswrapper[5050]: I0131 06:20:21.005022 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerStarted","Data":"d81082e45158b44dcd2736655f19003fcfec6e5e162da2802eab346f8ff0f858"} Jan 31 06:20:21 crc kubenswrapper[5050]: I0131 06:20:21.005642 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 06:20:21 crc kubenswrapper[5050]: I0131 06:20:21.031197 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.557133515 podStartE2EDuration="27.031179059s" podCreationTimestamp="2026-01-31 06:19:54 +0000 UTC" firstStartedPulling="2026-01-31 06:19:55.64917868 +0000 UTC m=+3520.698340266" lastFinishedPulling="2026-01-31 06:20:19.123224214 +0000 UTC m=+3544.172385810" observedRunningTime="2026-01-31 06:20:21.021876181 +0000 UTC m=+3546.071037777" watchObservedRunningTime="2026-01-31 06:20:21.031179059 +0000 UTC m=+3546.080340655" Jan 31 06:20:55 crc kubenswrapper[5050]: I0131 06:20:55.184649 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 06:21:39 crc kubenswrapper[5050]: I0131 06:21:39.018786 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 31 06:21:39 crc kubenswrapper[5050]: I0131 06:21:39.019319 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:22:09 crc kubenswrapper[5050]: I0131 06:22:09.018745 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:22:09 crc kubenswrapper[5050]: I0131 06:22:09.019374 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.107014 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.108530 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.111352 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.111752 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g8z7s" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.111784 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.111933 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.142714 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204460 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204497 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204583 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204626 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204730 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjgf\" (UniqueName: \"kubernetes.io/projected/35f7d3c2-6102-4838-ae18-e42d9d69e172-kube-api-access-xpjgf\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204754 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204797 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204868 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-config-data\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.204912 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.306632 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpjgf\" (UniqueName: \"kubernetes.io/projected/35f7d3c2-6102-4838-ae18-e42d9d69e172-kube-api-access-xpjgf\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.306694 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.306728 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.307397 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.308099 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-config-data\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.308215 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.308336 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.308362 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.308472 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.308577 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.308810 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.309685 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-config-data\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.309800 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.310079 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " 
pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.312866 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.314020 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.320677 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.333750 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpjgf\" (UniqueName: \"kubernetes.io/projected/35f7d3c2-6102-4838-ae18-e42d9d69e172-kube-api-access-xpjgf\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.344479 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.433569 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 06:22:10 crc kubenswrapper[5050]: I0131 06:22:10.901999 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 06:22:11 crc kubenswrapper[5050]: I0131 06:22:11.055279 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"35f7d3c2-6102-4838-ae18-e42d9d69e172","Type":"ContainerStarted","Data":"e9951cabb094d1e1b786b36baf03b32a266763012bef9983787653306f7e8deb"} Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.658711 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hr5jn"] Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.662808 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.672301 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr5jn"] Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.815551 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-utilities\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.815601 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm77x\" (UniqueName: \"kubernetes.io/projected/a37e405d-0e41-4267-8a33-617cb2b82acd-kube-api-access-nm77x\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.815646 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-catalog-content\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.917180 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-utilities\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.917239 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm77x\" (UniqueName: \"kubernetes.io/projected/a37e405d-0e41-4267-8a33-617cb2b82acd-kube-api-access-nm77x\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.917313 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-catalog-content\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.917811 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-utilities\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.917929 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-catalog-content\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.938034 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm77x\" (UniqueName: \"kubernetes.io/projected/a37e405d-0e41-4267-8a33-617cb2b82acd-kube-api-access-nm77x\") pod \"redhat-marketplace-hr5jn\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:23 crc kubenswrapper[5050]: I0131 06:22:23.988371 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:22:26 crc kubenswrapper[5050]: I0131 06:22:26.068803 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr5jn"] Jan 31 06:22:39 crc kubenswrapper[5050]: I0131 06:22:39.018104 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:22:39 crc kubenswrapper[5050]: I0131 06:22:39.018754 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:22:39 crc kubenswrapper[5050]: I0131 06:22:39.018821 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 
06:22:39 crc kubenswrapper[5050]: I0131 06:22:39.019580 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:22:39 crc kubenswrapper[5050]: I0131 06:22:39.019645 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" gracePeriod=600 Jan 31 06:23:06 crc kubenswrapper[5050]: I0131 06:23:06.048163 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-sbgrw"] Jan 31 06:23:06 crc kubenswrapper[5050]: I0131 06:23:06.058308 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-sbgrw"] Jan 31 06:23:06 crc kubenswrapper[5050]: I0131 06:23:06.566197 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" exitCode=0 Jan 31 06:23:06 crc kubenswrapper[5050]: I0131 06:23:06.566246 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a"} Jan 31 06:23:06 crc kubenswrapper[5050]: I0131 06:23:06.566300 5050 scope.go:117] "RemoveContainer" containerID="94806851aee0134f3d90df4319b62fedbb74408bf4a52f75abe44a79e6de8a38" Jan 31 06:23:07 crc kubenswrapper[5050]: I0131 06:23:07.748721 5050 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="28f0fb7d-6777-449f-a447-b4a4fb534df8" path="/var/lib/kubelet/pods/28f0fb7d-6777-449f-a447-b4a4fb534df8/volumes" Jan 31 06:23:23 crc kubenswrapper[5050]: I0131 06:23:23.032450 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-de63-account-create-update-xrlkn"] Jan 31 06:23:23 crc kubenswrapper[5050]: I0131 06:23:23.040624 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-de63-account-create-update-xrlkn"] Jan 31 06:23:23 crc kubenswrapper[5050]: I0131 06:23:23.755530 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7a786d1-99eb-4c32-98c6-876fb67fb320" path="/var/lib/kubelet/pods/b7a786d1-99eb-4c32-98c6-876fb67fb320/volumes" Jan 31 06:23:33 crc kubenswrapper[5050]: I0131 06:23:33.547570 5050 scope.go:117] "RemoveContainer" containerID="1e1a005c97de7c0519e244cc2103adae0fe19182e252e721c725525c0a1437d3" Jan 31 06:23:56 crc kubenswrapper[5050]: I0131 06:23:56.902785 5050 scope.go:117] "RemoveContainer" containerID="67032e162857caf5cd47681d5b5744e52f380a039a3e24bbcd051f4af27c14dd" Jan 31 06:23:57 crc kubenswrapper[5050]: I0131 06:23:57.123317 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerStarted","Data":"b3e99ad17744f023fb002099b6faee51e272fcb75ca99df39b5de17ada4439fc"} Jan 31 06:24:00 crc kubenswrapper[5050]: E0131 06:24:00.048277 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:24:00 crc kubenswrapper[5050]: I0131 06:24:00.151235 5050 
scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:24:00 crc kubenswrapper[5050]: E0131 06:24:00.151688 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:24:02 crc kubenswrapper[5050]: I0131 06:24:02.188299 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerStarted","Data":"44e1d4814772fb512c0781923ed8898bc4cd6b0b4bb28650b8465307d3ee7282"} Jan 31 06:24:03 crc kubenswrapper[5050]: I0131 06:24:03.200014 5050 generic.go:334] "Generic (PLEG): container finished" podID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerID="44e1d4814772fb512c0781923ed8898bc4cd6b0b4bb28650b8465307d3ee7282" exitCode=0 Jan 31 06:24:03 crc kubenswrapper[5050]: I0131 06:24:03.200108 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerDied","Data":"44e1d4814772fb512c0781923ed8898bc4cd6b0b4bb28650b8465307d3ee7282"} Jan 31 06:24:05 crc kubenswrapper[5050]: I0131 06:24:05.226213 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:24:13 crc kubenswrapper[5050]: I0131 06:24:13.737088 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:24:13 crc kubenswrapper[5050]: E0131 06:24:13.737981 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:24:26 crc kubenswrapper[5050]: I0131 06:24:26.736231 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:24:26 crc kubenswrapper[5050]: E0131 06:24:26.736918 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:24:33 crc kubenswrapper[5050]: I0131 06:24:33.473490 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerStarted","Data":"8f0d02c82eb57ca7ffe2f656d9c01ccf02bf5e5408303938b564427cee8a9f2f"} Jan 31 06:24:34 crc kubenswrapper[5050]: I0131 06:24:34.484493 5050 generic.go:334] "Generic (PLEG): container finished" podID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerID="8f0d02c82eb57ca7ffe2f656d9c01ccf02bf5e5408303938b564427cee8a9f2f" exitCode=0 Jan 31 06:24:34 crc kubenswrapper[5050]: I0131 06:24:34.484553 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerDied","Data":"8f0d02c82eb57ca7ffe2f656d9c01ccf02bf5e5408303938b564427cee8a9f2f"} Jan 31 06:24:37 crc kubenswrapper[5050]: I0131 06:24:37.739007 5050 scope.go:117] 
"RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:24:37 crc kubenswrapper[5050]: E0131 06:24:37.741941 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:24:51 crc kubenswrapper[5050]: I0131 06:24:51.736524 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:24:51 crc kubenswrapper[5050]: E0131 06:24:51.737205 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:25:03 crc kubenswrapper[5050]: I0131 06:25:03.736350 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:25:03 crc kubenswrapper[5050]: E0131 06:25:03.739065 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:25:13 crc kubenswrapper[5050]: E0131 06:25:13.404240 
5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 31 06:25:13 crc kubenswrapper[5050]: E0131 06:25:13.404894 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key
,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xpjgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(35f7d3c2-6102-4838-ae18-e42d9d69e172): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 06:25:13 crc kubenswrapper[5050]: E0131 06:25:13.406065 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="35f7d3c2-6102-4838-ae18-e42d9d69e172" Jan 31 06:25:13 crc kubenswrapper[5050]: E0131 06:25:13.852368 5050 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="35f7d3c2-6102-4838-ae18-e42d9d69e172" Jan 31 06:25:14 crc kubenswrapper[5050]: I0131 06:25:14.736031 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:25:14 crc kubenswrapper[5050]: E0131 06:25:14.736409 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:25:14 crc kubenswrapper[5050]: I0131 06:25:14.861003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerStarted","Data":"2637bc36990a0ac1c6d470c5b991d6975ca89042afd7da1f828ed20b9f085b29"} Jan 31 06:25:28 crc kubenswrapper[5050]: I0131 06:25:28.736877 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:25:28 crc kubenswrapper[5050]: E0131 06:25:28.737604 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:25:29 crc 
kubenswrapper[5050]: I0131 06:25:29.023475 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hr5jn" podStartSLOduration=118.424290153 podStartE2EDuration="3m6.023449875s" podCreationTimestamp="2026-01-31 06:22:23 +0000 UTC" firstStartedPulling="2026-01-31 06:24:05.22541411 +0000 UTC m=+3770.274575746" lastFinishedPulling="2026-01-31 06:25:12.824573872 +0000 UTC m=+3837.873735468" observedRunningTime="2026-01-31 06:25:28.997569427 +0000 UTC m=+3854.046731043" watchObservedRunningTime="2026-01-31 06:25:29.023449875 +0000 UTC m=+3854.072611481" Jan 31 06:25:30 crc kubenswrapper[5050]: I0131 06:25:30.772784 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 06:25:33 crc kubenswrapper[5050]: I0131 06:25:33.025821 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"35f7d3c2-6102-4838-ae18-e42d9d69e172","Type":"ContainerStarted","Data":"e49517ef8b685dfac24168e3c2e00eb735a38ada7332e31674932f0598b58171"} Jan 31 06:25:33 crc kubenswrapper[5050]: I0131 06:25:33.053634 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.190260469 podStartE2EDuration="3m24.053612913s" podCreationTimestamp="2026-01-31 06:22:09 +0000 UTC" firstStartedPulling="2026-01-31 06:22:10.906653772 +0000 UTC m=+3655.955815368" lastFinishedPulling="2026-01-31 06:25:30.770006216 +0000 UTC m=+3855.819167812" observedRunningTime="2026-01-31 06:25:33.049462201 +0000 UTC m=+3858.098623807" watchObservedRunningTime="2026-01-31 06:25:33.053612913 +0000 UTC m=+3858.102774509" Jan 31 06:25:33 crc kubenswrapper[5050]: I0131 06:25:33.991354 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:25:33 crc kubenswrapper[5050]: I0131 06:25:33.991404 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:25:34 crc kubenswrapper[5050]: I0131 06:25:34.031929 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:25:34 crc kubenswrapper[5050]: I0131 06:25:34.083229 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:25:34 crc kubenswrapper[5050]: I0131 06:25:34.268172 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr5jn"] Jan 31 06:25:36 crc kubenswrapper[5050]: I0131 06:25:36.059342 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hr5jn" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="registry-server" containerID="cri-o://2637bc36990a0ac1c6d470c5b991d6975ca89042afd7da1f828ed20b9f085b29" gracePeriod=2 Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.070895 5050 generic.go:334] "Generic (PLEG): container finished" podID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerID="2637bc36990a0ac1c6d470c5b991d6975ca89042afd7da1f828ed20b9f085b29" exitCode=0 Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.071048 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerDied","Data":"2637bc36990a0ac1c6d470c5b991d6975ca89042afd7da1f828ed20b9f085b29"} Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.509882 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.610123 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm77x\" (UniqueName: \"kubernetes.io/projected/a37e405d-0e41-4267-8a33-617cb2b82acd-kube-api-access-nm77x\") pod \"a37e405d-0e41-4267-8a33-617cb2b82acd\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.610548 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-catalog-content\") pod \"a37e405d-0e41-4267-8a33-617cb2b82acd\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.610670 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-utilities\") pod \"a37e405d-0e41-4267-8a33-617cb2b82acd\" (UID: \"a37e405d-0e41-4267-8a33-617cb2b82acd\") " Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.611443 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-utilities" (OuterVolumeSpecName: "utilities") pod "a37e405d-0e41-4267-8a33-617cb2b82acd" (UID: "a37e405d-0e41-4267-8a33-617cb2b82acd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.616516 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37e405d-0e41-4267-8a33-617cb2b82acd-kube-api-access-nm77x" (OuterVolumeSpecName: "kube-api-access-nm77x") pod "a37e405d-0e41-4267-8a33-617cb2b82acd" (UID: "a37e405d-0e41-4267-8a33-617cb2b82acd"). InnerVolumeSpecName "kube-api-access-nm77x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.632850 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a37e405d-0e41-4267-8a33-617cb2b82acd" (UID: "a37e405d-0e41-4267-8a33-617cb2b82acd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.713839 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm77x\" (UniqueName: \"kubernetes.io/projected/a37e405d-0e41-4267-8a33-617cb2b82acd-kube-api-access-nm77x\") on node \"crc\" DevicePath \"\"" Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.713880 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:25:37 crc kubenswrapper[5050]: I0131 06:25:37.713924 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37e405d-0e41-4267-8a33-617cb2b82acd-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:25:38 crc kubenswrapper[5050]: I0131 06:25:38.097348 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr5jn" event={"ID":"a37e405d-0e41-4267-8a33-617cb2b82acd","Type":"ContainerDied","Data":"b3e99ad17744f023fb002099b6faee51e272fcb75ca99df39b5de17ada4439fc"} Jan 31 06:25:38 crc kubenswrapper[5050]: I0131 06:25:38.098595 5050 scope.go:117] "RemoveContainer" containerID="2637bc36990a0ac1c6d470c5b991d6975ca89042afd7da1f828ed20b9f085b29" Jan 31 06:25:38 crc kubenswrapper[5050]: I0131 06:25:38.098837 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr5jn" Jan 31 06:25:38 crc kubenswrapper[5050]: I0131 06:25:38.126581 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr5jn"] Jan 31 06:25:38 crc kubenswrapper[5050]: I0131 06:25:38.129139 5050 scope.go:117] "RemoveContainer" containerID="8f0d02c82eb57ca7ffe2f656d9c01ccf02bf5e5408303938b564427cee8a9f2f" Jan 31 06:25:38 crc kubenswrapper[5050]: I0131 06:25:38.138037 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr5jn"] Jan 31 06:25:38 crc kubenswrapper[5050]: I0131 06:25:38.152859 5050 scope.go:117] "RemoveContainer" containerID="44e1d4814772fb512c0781923ed8898bc4cd6b0b4bb28650b8465307d3ee7282" Jan 31 06:25:39 crc kubenswrapper[5050]: I0131 06:25:39.750716 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" path="/var/lib/kubelet/pods/a37e405d-0e41-4267-8a33-617cb2b82acd/volumes" Jan 31 06:25:43 crc kubenswrapper[5050]: I0131 06:25:43.736126 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:25:43 crc kubenswrapper[5050]: E0131 06:25:43.736726 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:25:56 crc kubenswrapper[5050]: I0131 06:25:56.736254 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:25:56 crc kubenswrapper[5050]: E0131 06:25:56.737129 5050 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:26:08 crc kubenswrapper[5050]: I0131 06:26:08.736722 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:26:08 crc kubenswrapper[5050]: E0131 06:26:08.737574 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:26:19 crc kubenswrapper[5050]: I0131 06:26:19.736417 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:26:19 crc kubenswrapper[5050]: E0131 06:26:19.737359 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.862417 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7s8bl"] Jan 31 06:26:22 crc kubenswrapper[5050]: E0131 06:26:22.864015 5050 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="extract-utilities" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.864041 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="extract-utilities" Jan 31 06:26:22 crc kubenswrapper[5050]: E0131 06:26:22.864065 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="registry-server" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.864073 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="registry-server" Jan 31 06:26:22 crc kubenswrapper[5050]: E0131 06:26:22.864096 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="extract-content" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.864104 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="extract-content" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.864584 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37e405d-0e41-4267-8a33-617cb2b82acd" containerName="registry-server" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.869882 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.888806 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7s8bl"] Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.989827 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-catalog-content\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.989970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-utilities\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:22 crc kubenswrapper[5050]: I0131 06:26:22.990081 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2nv\" (UniqueName: \"kubernetes.io/projected/e29298f8-2e67-41d4-b494-4939155deb19-kube-api-access-9d2nv\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.092321 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-catalog-content\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.092368 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-utilities\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.092453 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d2nv\" (UniqueName: \"kubernetes.io/projected/e29298f8-2e67-41d4-b494-4939155deb19-kube-api-access-9d2nv\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.092866 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-catalog-content\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.093224 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-utilities\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.115767 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d2nv\" (UniqueName: \"kubernetes.io/projected/e29298f8-2e67-41d4-b494-4939155deb19-kube-api-access-9d2nv\") pod \"community-operators-7s8bl\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.212704 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:23 crc kubenswrapper[5050]: I0131 06:26:23.731553 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7s8bl"] Jan 31 06:26:24 crc kubenswrapper[5050]: I0131 06:26:24.517246 5050 generic.go:334] "Generic (PLEG): container finished" podID="e29298f8-2e67-41d4-b494-4939155deb19" containerID="6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad" exitCode=0 Jan 31 06:26:24 crc kubenswrapper[5050]: I0131 06:26:24.517301 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s8bl" event={"ID":"e29298f8-2e67-41d4-b494-4939155deb19","Type":"ContainerDied","Data":"6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad"} Jan 31 06:26:24 crc kubenswrapper[5050]: I0131 06:26:24.517328 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s8bl" event={"ID":"e29298f8-2e67-41d4-b494-4939155deb19","Type":"ContainerStarted","Data":"0fab9e6b1ace97009830a79212793ce4cecd2be88b91823f3384d80ed0b977f8"} Jan 31 06:26:30 crc kubenswrapper[5050]: I0131 06:26:30.567347 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s8bl" event={"ID":"e29298f8-2e67-41d4-b494-4939155deb19","Type":"ContainerStarted","Data":"e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b"} Jan 31 06:26:30 crc kubenswrapper[5050]: I0131 06:26:30.736586 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:26:30 crc kubenswrapper[5050]: E0131 06:26:30.736890 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:26:34 crc kubenswrapper[5050]: I0131 06:26:34.604614 5050 generic.go:334] "Generic (PLEG): container finished" podID="e29298f8-2e67-41d4-b494-4939155deb19" containerID="e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b" exitCode=0 Jan 31 06:26:34 crc kubenswrapper[5050]: I0131 06:26:34.604698 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s8bl" event={"ID":"e29298f8-2e67-41d4-b494-4939155deb19","Type":"ContainerDied","Data":"e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b"} Jan 31 06:26:38 crc kubenswrapper[5050]: I0131 06:26:38.644085 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s8bl" event={"ID":"e29298f8-2e67-41d4-b494-4939155deb19","Type":"ContainerStarted","Data":"77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576"} Jan 31 06:26:38 crc kubenswrapper[5050]: I0131 06:26:38.675466 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7s8bl" podStartSLOduration=3.803571532 podStartE2EDuration="16.675449933s" podCreationTimestamp="2026-01-31 06:26:22 +0000 UTC" firstStartedPulling="2026-01-31 06:26:24.519398712 +0000 UTC m=+3909.568560308" lastFinishedPulling="2026-01-31 06:26:37.391277113 +0000 UTC m=+3922.440438709" observedRunningTime="2026-01-31 06:26:38.660787557 +0000 UTC m=+3923.709949153" watchObservedRunningTime="2026-01-31 06:26:38.675449933 +0000 UTC m=+3923.724611529" Jan 31 06:26:41 crc kubenswrapper[5050]: I0131 06:26:41.737016 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:26:41 crc kubenswrapper[5050]: E0131 06:26:41.738028 
5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:26:43 crc kubenswrapper[5050]: I0131 06:26:43.213513 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:43 crc kubenswrapper[5050]: I0131 06:26:43.214206 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:43 crc kubenswrapper[5050]: I0131 06:26:43.263429 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:43 crc kubenswrapper[5050]: I0131 06:26:43.730813 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:43 crc kubenswrapper[5050]: I0131 06:26:43.784924 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7s8bl"] Jan 31 06:26:45 crc kubenswrapper[5050]: I0131 06:26:45.702762 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7s8bl" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="registry-server" containerID="cri-o://77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576" gracePeriod=2 Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.229347 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.294612 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-utilities\") pod \"e29298f8-2e67-41d4-b494-4939155deb19\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.294673 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d2nv\" (UniqueName: \"kubernetes.io/projected/e29298f8-2e67-41d4-b494-4939155deb19-kube-api-access-9d2nv\") pod \"e29298f8-2e67-41d4-b494-4939155deb19\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.294734 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-catalog-content\") pod \"e29298f8-2e67-41d4-b494-4939155deb19\" (UID: \"e29298f8-2e67-41d4-b494-4939155deb19\") " Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.295479 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-utilities" (OuterVolumeSpecName: "utilities") pod "e29298f8-2e67-41d4-b494-4939155deb19" (UID: "e29298f8-2e67-41d4-b494-4939155deb19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.301276 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29298f8-2e67-41d4-b494-4939155deb19-kube-api-access-9d2nv" (OuterVolumeSpecName: "kube-api-access-9d2nv") pod "e29298f8-2e67-41d4-b494-4939155deb19" (UID: "e29298f8-2e67-41d4-b494-4939155deb19"). InnerVolumeSpecName "kube-api-access-9d2nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.345216 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e29298f8-2e67-41d4-b494-4939155deb19" (UID: "e29298f8-2e67-41d4-b494-4939155deb19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.396482 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.396516 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d2nv\" (UniqueName: \"kubernetes.io/projected/e29298f8-2e67-41d4-b494-4939155deb19-kube-api-access-9d2nv\") on node \"crc\" DevicePath \"\"" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.396525 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29298f8-2e67-41d4-b494-4939155deb19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.711249 5050 generic.go:334] "Generic (PLEG): container finished" podID="e29298f8-2e67-41d4-b494-4939155deb19" containerID="77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576" exitCode=0 Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.711324 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s8bl" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.711339 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s8bl" event={"ID":"e29298f8-2e67-41d4-b494-4939155deb19","Type":"ContainerDied","Data":"77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576"} Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.711701 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s8bl" event={"ID":"e29298f8-2e67-41d4-b494-4939155deb19","Type":"ContainerDied","Data":"0fab9e6b1ace97009830a79212793ce4cecd2be88b91823f3384d80ed0b977f8"} Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.711735 5050 scope.go:117] "RemoveContainer" containerID="77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.730006 5050 scope.go:117] "RemoveContainer" containerID="e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.745227 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7s8bl"] Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.752799 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7s8bl"] Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.777066 5050 scope.go:117] "RemoveContainer" containerID="6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.801589 5050 scope.go:117] "RemoveContainer" containerID="77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576" Jan 31 06:26:46 crc kubenswrapper[5050]: E0131 06:26:46.801999 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576\": container with ID starting with 77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576 not found: ID does not exist" containerID="77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.802029 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576"} err="failed to get container status \"77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576\": rpc error: code = NotFound desc = could not find container \"77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576\": container with ID starting with 77c11b2a07897bd4da917266686f0215bddd113f16256d803cfad8175af89576 not found: ID does not exist" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.802049 5050 scope.go:117] "RemoveContainer" containerID="e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b" Jan 31 06:26:46 crc kubenswrapper[5050]: E0131 06:26:46.802248 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b\": container with ID starting with e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b not found: ID does not exist" containerID="e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.802266 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b"} err="failed to get container status \"e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b\": rpc error: code = NotFound desc = could not find container \"e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b\": container with ID 
starting with e2c7bd0b12ac06d571561bfa7a0b8038710f55a01bd39c2c9fe2f46af64d1d1b not found: ID does not exist" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.802279 5050 scope.go:117] "RemoveContainer" containerID="6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad" Jan 31 06:26:46 crc kubenswrapper[5050]: E0131 06:26:46.802453 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad\": container with ID starting with 6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad not found: ID does not exist" containerID="6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad" Jan 31 06:26:46 crc kubenswrapper[5050]: I0131 06:26:46.802467 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad"} err="failed to get container status \"6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad\": rpc error: code = NotFound desc = could not find container \"6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad\": container with ID starting with 6ef529cc9efcd2f4282f7f283bdfb8eb4f84e77a5afcaae628f0c8d489ee5bad not found: ID does not exist" Jan 31 06:26:47 crc kubenswrapper[5050]: I0131 06:26:47.753472 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29298f8-2e67-41d4-b494-4939155deb19" path="/var/lib/kubelet/pods/e29298f8-2e67-41d4-b494-4939155deb19/volumes" Jan 31 06:26:55 crc kubenswrapper[5050]: I0131 06:26:55.743255 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:26:55 crc kubenswrapper[5050]: E0131 06:26:55.744118 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:27:06 crc kubenswrapper[5050]: I0131 06:27:06.736067 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:27:06 crc kubenswrapper[5050]: E0131 06:27:06.736791 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:27:19 crc kubenswrapper[5050]: I0131 06:27:19.736389 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:27:19 crc kubenswrapper[5050]: E0131 06:27:19.737237 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:27:32 crc kubenswrapper[5050]: I0131 06:27:32.737029 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:27:32 crc kubenswrapper[5050]: E0131 06:27:32.737801 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.623578 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ktfxt"] Jan 31 06:27:38 crc kubenswrapper[5050]: E0131 06:27:38.624922 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="registry-server" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.624940 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="registry-server" Jan 31 06:27:38 crc kubenswrapper[5050]: E0131 06:27:38.624996 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="extract-content" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.625019 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="extract-content" Jan 31 06:27:38 crc kubenswrapper[5050]: E0131 06:27:38.625057 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="extract-utilities" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.625066 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="extract-utilities" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.625481 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29298f8-2e67-41d4-b494-4939155deb19" containerName="registry-server" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.627307 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.640624 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktfxt"] Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.788783 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-catalog-content\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.789067 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c4wt\" (UniqueName: \"kubernetes.io/projected/6c579e6b-44b5-4e40-816c-ce48884954bb-kube-api-access-2c4wt\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.789224 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-utilities\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.891290 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-utilities\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.891466 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-catalog-content\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.891548 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c4wt\" (UniqueName: \"kubernetes.io/projected/6c579e6b-44b5-4e40-816c-ce48884954bb-kube-api-access-2c4wt\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.892284 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-utilities\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.892307 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-catalog-content\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.915436 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c4wt\" (UniqueName: \"kubernetes.io/projected/6c579e6b-44b5-4e40-816c-ce48884954bb-kube-api-access-2c4wt\") pod \"certified-operators-ktfxt\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:38 crc kubenswrapper[5050]: I0131 06:27:38.985346 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:39 crc kubenswrapper[5050]: I0131 06:27:39.643667 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktfxt"] Jan 31 06:27:40 crc kubenswrapper[5050]: I0131 06:27:40.145514 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerID="a04a25dc8b9ddf5f140a28131300b128c792fc457bbdb495f471485128232a62" exitCode=0 Jan 31 06:27:40 crc kubenswrapper[5050]: I0131 06:27:40.145904 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktfxt" event={"ID":"6c579e6b-44b5-4e40-816c-ce48884954bb","Type":"ContainerDied","Data":"a04a25dc8b9ddf5f140a28131300b128c792fc457bbdb495f471485128232a62"} Jan 31 06:27:40 crc kubenswrapper[5050]: I0131 06:27:40.145939 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktfxt" event={"ID":"6c579e6b-44b5-4e40-816c-ce48884954bb","Type":"ContainerStarted","Data":"cff7b000b583b00e8c466991e3d7c85cf4a5ad79c26ef8b50798a8cee1b91c5b"} Jan 31 06:27:45 crc kubenswrapper[5050]: I0131 06:27:45.194286 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktfxt" event={"ID":"6c579e6b-44b5-4e40-816c-ce48884954bb","Type":"ContainerStarted","Data":"db442a4880ec71bd98a3df91cf7d706afb40653ac34e5f1f5f28c62010095c94"} Jan 31 06:27:46 crc kubenswrapper[5050]: I0131 06:27:46.204577 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerID="db442a4880ec71bd98a3df91cf7d706afb40653ac34e5f1f5f28c62010095c94" exitCode=0 Jan 31 06:27:46 crc kubenswrapper[5050]: I0131 06:27:46.204644 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktfxt" 
event={"ID":"6c579e6b-44b5-4e40-816c-ce48884954bb","Type":"ContainerDied","Data":"db442a4880ec71bd98a3df91cf7d706afb40653ac34e5f1f5f28c62010095c94"} Jan 31 06:27:47 crc kubenswrapper[5050]: I0131 06:27:47.737480 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:27:47 crc kubenswrapper[5050]: E0131 06:27:47.738116 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:27:52 crc kubenswrapper[5050]: I0131 06:27:52.256090 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktfxt" event={"ID":"6c579e6b-44b5-4e40-816c-ce48884954bb","Type":"ContainerStarted","Data":"9c4e0049f580487658c511cfe0b05e1e5ee9b1834dc43b984ae85a70533deed4"} Jan 31 06:27:52 crc kubenswrapper[5050]: I0131 06:27:52.279315 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ktfxt" podStartSLOduration=3.429816533 podStartE2EDuration="14.279295782s" podCreationTimestamp="2026-01-31 06:27:38 +0000 UTC" firstStartedPulling="2026-01-31 06:27:40.149041687 +0000 UTC m=+3985.198203283" lastFinishedPulling="2026-01-31 06:27:50.998520926 +0000 UTC m=+3996.047682532" observedRunningTime="2026-01-31 06:27:52.278754208 +0000 UTC m=+3997.327915824" watchObservedRunningTime="2026-01-31 06:27:52.279295782 +0000 UTC m=+3997.328457398" Jan 31 06:27:58 crc kubenswrapper[5050]: I0131 06:27:58.985757 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:27:58 crc 
kubenswrapper[5050]: I0131 06:27:58.986309 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:28:00 crc kubenswrapper[5050]: I0131 06:28:00.036754 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ktfxt" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="registry-server" probeResult="failure" output=< Jan 31 06:28:00 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:28:00 crc kubenswrapper[5050]: > Jan 31 06:28:02 crc kubenswrapper[5050]: I0131 06:28:02.735888 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:28:02 crc kubenswrapper[5050]: E0131 06:28:02.736657 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.494288 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rs9rd"] Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.499559 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.536266 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rs9rd"] Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.629222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-utilities\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.629635 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-catalog-content\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.629740 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/e80febf1-fd85-4e73-baca-269c9ee21fa9-kube-api-access-t6qmb\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.731266 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-utilities\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.731797 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-catalog-content\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.731929 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-utilities\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.732061 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/e80febf1-fd85-4e73-baca-269c9ee21fa9-kube-api-access-t6qmb\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.732380 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-catalog-content\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.753511 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/e80febf1-fd85-4e73-baca-269c9ee21fa9-kube-api-access-t6qmb\") pod \"redhat-operators-rs9rd\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:08 crc kubenswrapper[5050]: I0131 06:28:08.846739 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:28:09 crc kubenswrapper[5050]: I0131 06:28:09.054686 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:28:09 crc kubenswrapper[5050]: I0131 06:28:09.120570 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:28:09 crc kubenswrapper[5050]: I0131 06:28:09.395045 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rs9rd"] Jan 31 06:28:10 crc kubenswrapper[5050]: I0131 06:28:10.126988 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerStarted","Data":"a12d0773437bd638188d3ab53fcb965830dceae678db4a4a8f7ff3e0065f79ea"} Jan 31 06:28:10 crc kubenswrapper[5050]: I0131 06:28:10.127392 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerStarted","Data":"5d14762567c0958edd587d8574b7f0812c78a59855bfccd81af724678c92e281"} Jan 31 06:28:11 crc kubenswrapper[5050]: I0131 06:28:11.136150 5050 generic.go:334] "Generic (PLEG): container finished" podID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerID="a12d0773437bd638188d3ab53fcb965830dceae678db4a4a8f7ff3e0065f79ea" exitCode=0 Jan 31 06:28:11 crc kubenswrapper[5050]: I0131 06:28:11.136225 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerDied","Data":"a12d0773437bd638188d3ab53fcb965830dceae678db4a4a8f7ff3e0065f79ea"} Jan 31 06:28:11 crc kubenswrapper[5050]: I0131 06:28:11.454030 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-ktfxt"] Jan 31 06:28:11 crc kubenswrapper[5050]: I0131 06:28:11.454265 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ktfxt" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="registry-server" containerID="cri-o://9c4e0049f580487658c511cfe0b05e1e5ee9b1834dc43b984ae85a70533deed4" gracePeriod=2 Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.147487 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerID="9c4e0049f580487658c511cfe0b05e1e5ee9b1834dc43b984ae85a70533deed4" exitCode=0 Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.147577 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktfxt" event={"ID":"6c579e6b-44b5-4e40-816c-ce48884954bb","Type":"ContainerDied","Data":"9c4e0049f580487658c511cfe0b05e1e5ee9b1834dc43b984ae85a70533deed4"} Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.546061 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.636640 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-utilities\") pod \"6c579e6b-44b5-4e40-816c-ce48884954bb\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.636767 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c4wt\" (UniqueName: \"kubernetes.io/projected/6c579e6b-44b5-4e40-816c-ce48884954bb-kube-api-access-2c4wt\") pod \"6c579e6b-44b5-4e40-816c-ce48884954bb\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.637016 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-catalog-content\") pod \"6c579e6b-44b5-4e40-816c-ce48884954bb\" (UID: \"6c579e6b-44b5-4e40-816c-ce48884954bb\") " Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.637151 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-utilities" (OuterVolumeSpecName: "utilities") pod "6c579e6b-44b5-4e40-816c-ce48884954bb" (UID: "6c579e6b-44b5-4e40-816c-ce48884954bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.637480 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.642647 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c579e6b-44b5-4e40-816c-ce48884954bb-kube-api-access-2c4wt" (OuterVolumeSpecName: "kube-api-access-2c4wt") pod "6c579e6b-44b5-4e40-816c-ce48884954bb" (UID: "6c579e6b-44b5-4e40-816c-ce48884954bb"). InnerVolumeSpecName "kube-api-access-2c4wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.682853 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c579e6b-44b5-4e40-816c-ce48884954bb" (UID: "6c579e6b-44b5-4e40-816c-ce48884954bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.739597 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c4wt\" (UniqueName: \"kubernetes.io/projected/6c579e6b-44b5-4e40-816c-ce48884954bb-kube-api-access-2c4wt\") on node \"crc\" DevicePath \"\"" Jan 31 06:28:12 crc kubenswrapper[5050]: I0131 06:28:12.739637 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c579e6b-44b5-4e40-816c-ce48884954bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.162136 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktfxt" event={"ID":"6c579e6b-44b5-4e40-816c-ce48884954bb","Type":"ContainerDied","Data":"cff7b000b583b00e8c466991e3d7c85cf4a5ad79c26ef8b50798a8cee1b91c5b"} Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.162237 5050 scope.go:117] "RemoveContainer" containerID="9c4e0049f580487658c511cfe0b05e1e5ee9b1834dc43b984ae85a70533deed4" Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.162250 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktfxt" Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.212415 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ktfxt"] Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.221305 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ktfxt"] Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.538668 5050 scope.go:117] "RemoveContainer" containerID="db442a4880ec71bd98a3df91cf7d706afb40653ac34e5f1f5f28c62010095c94" Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.746575 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" path="/var/lib/kubelet/pods/6c579e6b-44b5-4e40-816c-ce48884954bb/volumes" Jan 31 06:28:13 crc kubenswrapper[5050]: I0131 06:28:13.927371 5050 scope.go:117] "RemoveContainer" containerID="a04a25dc8b9ddf5f140a28131300b128c792fc457bbdb495f471485128232a62" Jan 31 06:28:15 crc kubenswrapper[5050]: I0131 06:28:15.747111 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a" Jan 31 06:28:25 crc kubenswrapper[5050]: I0131 06:28:25.768750 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 31 06:28:27 crc kubenswrapper[5050]: I0131 06:28:27.279248 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.156:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:29 crc kubenswrapper[5050]: I0131 06:28:29.202138 5050 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/cinder-volume-volume1-0" podUID="1115b898-f052-46bf-886a-489b12a35afb" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.236:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:29 crc kubenswrapper[5050]: I0131 06:28:29.248144 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="4914b8b7-fa26-4e58-85e1-c072305954cf" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.237:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:30 crc kubenswrapper[5050]: I0131 06:28:30.771721 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 31 06:28:30 crc kubenswrapper[5050]: I0131 06:28:30.771763 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 31 06:28:32 crc kubenswrapper[5050]: I0131 06:28:32.321264 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.156:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:34 crc kubenswrapper[5050]: I0131 06:28:34.244229 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="1115b898-f052-46bf-886a-489b12a35afb" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.236:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:34 crc kubenswrapper[5050]: I0131 
06:28:34.291219 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="4914b8b7-fa26-4e58-85e1-c072305954cf" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.237:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:35 crc kubenswrapper[5050]: I0131 06:28:35.770521 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 31 06:28:35 crc kubenswrapper[5050]: I0131 06:28:35.770638 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 31 06:28:35 crc kubenswrapper[5050]: I0131 06:28:35.771575 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"ce3b9fae133b3b33b64bf3541b9b3de0f2e08ee194ce24cf11726765325664fb"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 31 06:28:35 crc kubenswrapper[5050]: I0131 06:28:35.771669 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-central-agent" containerID="cri-o://ce3b9fae133b3b33b64bf3541b9b3de0f2e08ee194ce24cf11726765325664fb" gracePeriod=30 Jan 31 06:28:37 crc kubenswrapper[5050]: I0131 06:28:37.364305 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.156:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:37 crc kubenswrapper[5050]: I0131 06:28:37.364859 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 06:28:37 crc kubenswrapper[5050]: I0131 06:28:37.366837 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"9c263d8cc26f132cf23a97bd553a2466371e14ffe39b4f6f9f01b72f1be5b20f"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 31 06:28:37 crc kubenswrapper[5050]: I0131 06:28:37.367034 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerName="cinder-scheduler" containerID="cri-o://9c263d8cc26f132cf23a97bd553a2466371e14ffe39b4f6f9f01b72f1be5b20f" gracePeriod=30 Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.286213 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="1115b898-f052-46bf-886a-489b12a35afb" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.236:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.286637 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.287581 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-volume" containerStatusID={"Type":"cri-o","ID":"ad34304d8e8b48d6ec1f501f34b21e4c8af08fd245f839eace505fbfe66394b3"} pod="openstack/cinder-volume-volume1-0" containerMessage="Container cinder-volume failed liveness probe, will be restarted" Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.287635 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="1115b898-f052-46bf-886a-489b12a35afb" 
containerName="cinder-volume" containerID="cri-o://ad34304d8e8b48d6ec1f501f34b21e4c8af08fd245f839eace505fbfe66394b3" gracePeriod=30 Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.333152 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="4914b8b7-fa26-4e58-85e1-c072305954cf" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.237:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.333240 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-backup-0" Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.334185 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-backup" containerStatusID={"Type":"cri-o","ID":"6739971cadee1e2532bfcec4b882cc55cf26fbdc50489c2390797118b7df847f"} pod="openstack/cinder-backup-0" containerMessage="Container cinder-backup failed liveness probe, will be restarted" Jan 31 06:28:39 crc kubenswrapper[5050]: I0131 06:28:39.334257 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="4914b8b7-fa26-4e58-85e1-c072305954cf" containerName="cinder-backup" containerID="cri-o://6739971cadee1e2532bfcec4b882cc55cf26fbdc50489c2390797118b7df847f" gracePeriod=30 Jan 31 06:28:47 crc kubenswrapper[5050]: I0131 06:28:47.869322 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.3:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:55 crc kubenswrapper[5050]: I0131 06:28:55.569686 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" 
event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerStarted","Data":"e5f6cc813211f71630a0ac792925fa773f1a5a5ff8afbe083452a37859923aaa"} Jan 31 06:28:59 crc kubenswrapper[5050]: I0131 06:28:59.019790 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="f9e0474f-b8df-4860-80ad-e852d72f4071" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:28:59 crc kubenswrapper[5050]: I0131 06:28:59.071275 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-bcp7s"] Jan 31 06:28:59 crc kubenswrapper[5050]: I0131 06:28:59.079637 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-bcp7s"] Jan 31 06:28:59 crc kubenswrapper[5050]: I0131 06:28:59.748499 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc7d8ad-779c-4340-b51c-01a232f106b8" path="/var/lib/kubelet/pods/adc7d8ad-779c-4340-b51c-01a232f106b8/volumes" Jan 31 06:29:04 crc kubenswrapper[5050]: I0131 06:29:04.361432 5050 generic.go:334] "Generic (PLEG): container finished" podID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerID="ce3b9fae133b3b33b64bf3541b9b3de0f2e08ee194ce24cf11726765325664fb" exitCode=-1 Jan 31 06:29:04 crc kubenswrapper[5050]: I0131 06:29:04.361502 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerDied","Data":"ce3b9fae133b3b33b64bf3541b9b3de0f2e08ee194ce24cf11726765325664fb"} Jan 31 06:29:06 crc kubenswrapper[5050]: I0131 06:29:06.707267 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 31 06:29:06 crc kubenswrapper[5050]: I0131 06:29:06.707373 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 06:29:07 crc kubenswrapper[5050]: I0131 06:29:07.911159 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.3:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:29:15 crc kubenswrapper[5050]: I0131 06:29:15.159320 5050 generic.go:334] "Generic (PLEG): container finished" podID="7b9ed42c-b571-4eec-b45d-802eaa8cf8b7" containerID="9c263d8cc26f132cf23a97bd553a2466371e14ffe39b4f6f9f01b72f1be5b20f" exitCode=-1 Jan 31 06:29:15 crc kubenswrapper[5050]: I0131 06:29:15.159486 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7","Type":"ContainerDied","Data":"9c263d8cc26f132cf23a97bd553a2466371e14ffe39b4f6f9f01b72f1be5b20f"} Jan 31 06:29:19 crc kubenswrapper[5050]: I0131 06:29:19.063521 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="f9e0474f-b8df-4860-80ad-e852d72f4071" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:29:19 crc kubenswrapper[5050]: I0131 06:29:19.911150 5050 generic.go:334] "Generic (PLEG): container finished" podID="1115b898-f052-46bf-886a-489b12a35afb" containerID="ad34304d8e8b48d6ec1f501f34b21e4c8af08fd245f839eace505fbfe66394b3" exitCode=-1 Jan 31 06:29:19 crc kubenswrapper[5050]: 
I0131 06:29:19.911219 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1115b898-f052-46bf-886a-489b12a35afb","Type":"ContainerDied","Data":"ad34304d8e8b48d6ec1f501f34b21e4c8af08fd245f839eace505fbfe66394b3"} Jan 31 06:29:23 crc kubenswrapper[5050]: I0131 06:29:23.663386 5050 generic.go:334] "Generic (PLEG): container finished" podID="4914b8b7-fa26-4e58-85e1-c072305954cf" containerID="6739971cadee1e2532bfcec4b882cc55cf26fbdc50489c2390797118b7df847f" exitCode=-1 Jan 31 06:29:23 crc kubenswrapper[5050]: I0131 06:29:23.663494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4914b8b7-fa26-4e58-85e1-c072305954cf","Type":"ContainerDied","Data":"6739971cadee1e2532bfcec4b882cc55cf26fbdc50489c2390797118b7df847f"} Jan 31 06:29:27 crc kubenswrapper[5050]: I0131 06:29:27.952114 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.3:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:29:27 crc kubenswrapper[5050]: I0131 06:29:27.952714 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 06:29:27 crc kubenswrapper[5050]: I0131 06:29:27.954364 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-scheduler" containerStatusID={"Type":"cri-o","ID":"d68949d4244ebcd4cdc776eb5e19e51b0be9506e1e798f24581725f6cbca7e29"} pod="openstack/manila-scheduler-0" containerMessage="Container manila-scheduler failed liveness probe, will be restarted" Jan 31 06:29:27 crc kubenswrapper[5050]: I0131 06:29:27.954418 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c" 
containerName="manila-scheduler" containerID="cri-o://d68949d4244ebcd4cdc776eb5e19e51b0be9506e1e798f24581725f6cbca7e29" gracePeriod=30 Jan 31 06:29:30 crc kubenswrapper[5050]: I0131 06:29:30.771387 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 31 06:29:36 crc kubenswrapper[5050]: I0131 06:29:36.720105 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 06:29:36 crc kubenswrapper[5050]: I0131 06:29:36.720153 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 06:29:39 crc kubenswrapper[5050]: I0131 06:29:39.105187 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="f9e0474f-b8df-4860-80ad-e852d72f4071" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:29:39 crc kubenswrapper[5050]: I0131 06:29:39.105620 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 06:29:39 crc kubenswrapper[5050]: I0131 06:29:39.106689 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-share" containerStatusID={"Type":"cri-o","ID":"2b1b6274e442013a940d7d90cfb018475e6d28b1a7c1cdfb7fe2f55bf4686b15"} 
pod="openstack/manila-share-share1-0" containerMessage="Container manila-share failed liveness probe, will be restarted" Jan 31 06:29:39 crc kubenswrapper[5050]: I0131 06:29:39.106825 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="f9e0474f-b8df-4860-80ad-e852d72f4071" containerName="manila-share" containerID="cri-o://2b1b6274e442013a940d7d90cfb018475e6d28b1a7c1cdfb7fe2f55bf4686b15" gracePeriod=30 Jan 31 06:29:58 crc kubenswrapper[5050]: I0131 06:29:58.669557 5050 scope.go:117] "RemoveContainer" containerID="739610c0af77343b6d04ee89e1c7717b5105ab129c905b9680d604973762e52b" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.169292 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm"] Jan 31 06:30:00 crc kubenswrapper[5050]: E0131 06:30:00.169763 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="extract-content" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.169780 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="extract-content" Jan 31 06:30:00 crc kubenswrapper[5050]: E0131 06:30:00.169796 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="extract-utilities" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.169806 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="extract-utilities" Jan 31 06:30:00 crc kubenswrapper[5050]: E0131 06:30:00.169843 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="registry-server" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.169853 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="registry-server" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.170066 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c579e6b-44b5-4e40-816c-ce48884954bb" containerName="registry-server" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.170687 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.172810 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.173128 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.179486 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm"] Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.195899 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-secret-volume\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.196210 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-config-volume\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc 
kubenswrapper[5050]: I0131 06:30:00.196435 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkpq\" (UniqueName: \"kubernetes.io/projected/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-kube-api-access-7mkpq\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.298104 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkpq\" (UniqueName: \"kubernetes.io/projected/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-kube-api-access-7mkpq\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.298162 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-secret-volume\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.298245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-config-volume\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.299268 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-config-volume\") pod \"collect-profiles-29497350-rljcm\" 
(UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.304671 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-secret-volume\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.316172 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkpq\" (UniqueName: \"kubernetes.io/projected/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-kube-api-access-7mkpq\") pod \"collect-profiles-29497350-rljcm\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.502486 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.771428 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 31 06:30:00 crc kubenswrapper[5050]: I0131 06:30:00.972345 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm"] Jan 31 06:30:01 crc kubenswrapper[5050]: I0131 06:30:01.072232 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" event={"ID":"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a","Type":"ContainerStarted","Data":"3ffb2af2b80a383236b38da03c558f3acf64887a0b3fd417c5477b2d850ab15d"} Jan 31 06:30:05 crc kubenswrapper[5050]: I0131 06:30:05.815514 5050 generic.go:334] "Generic (PLEG): container finished" podID="ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c" containerID="d68949d4244ebcd4cdc776eb5e19e51b0be9506e1e798f24581725f6cbca7e29" exitCode=-1 Jan 31 06:30:05 crc kubenswrapper[5050]: I0131 06:30:05.815582 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c","Type":"ContainerDied","Data":"d68949d4244ebcd4cdc776eb5e19e51b0be9506e1e798f24581725f6cbca7e29"} Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.730225 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.730841 5050 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.731203 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.731243 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.731963 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="keystone-api" containerStatusID={"Type":"cri-o","ID":"64f0f348f165390ea1988d2662452a5ceb8269ed7be72c3aebb26815f1c246de"} pod="openstack/keystone-7bf64f7fd-jlmtk" containerMessage="Container keystone-api failed liveness probe, will be restarted" Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.732003 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" containerID="cri-o://64f0f348f165390ea1988d2662452a5ceb8269ed7be72c3aebb26815f1c246de" gracePeriod=30 Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.765773 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": EOF" Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.824324 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" 
event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"b5befb432b2b57e36a9e4ab8b626a53f7a42f79cf6fc4e5b91dc28b5896bd449"} Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.826069 5050 generic.go:334] "Generic (PLEG): container finished" podID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerID="e5f6cc813211f71630a0ac792925fa773f1a5a5ff8afbe083452a37859923aaa" exitCode=0 Jan 31 06:30:06 crc kubenswrapper[5050]: I0131 06:30:06.826117 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerDied","Data":"e5f6cc813211f71630a0ac792925fa773f1a5a5ff8afbe083452a37859923aaa"} Jan 31 06:30:09 crc kubenswrapper[5050]: I0131 06:30:09.046197 5050 patch_prober.go:28] interesting pod/router-default-5444994796-87m8f container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 06:30:09 crc kubenswrapper[5050]: I0131 06:30:09.046664 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-87m8f" podUID="e458d0aa-1771-4429-ba32-39cc22f3d638" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 06:30:09 crc kubenswrapper[5050]: I0131 06:30:09.852412 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:30:10 crc kubenswrapper[5050]: I0131 06:30:10.865199 5050 generic.go:334] "Generic (PLEG): container finished" podID="f9e0474f-b8df-4860-80ad-e852d72f4071" containerID="2b1b6274e442013a940d7d90cfb018475e6d28b1a7c1cdfb7fe2f55bf4686b15" exitCode=137 Jan 31 06:30:10 crc kubenswrapper[5050]: I0131 06:30:10.865224 5050 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f9e0474f-b8df-4860-80ad-e852d72f4071","Type":"ContainerDied","Data":"2b1b6274e442013a940d7d90cfb018475e6d28b1a7c1cdfb7fe2f55bf4686b15"} Jan 31 06:30:10 crc kubenswrapper[5050]: I0131 06:30:10.867944 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" event={"ID":"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a","Type":"ContainerStarted","Data":"86cd18777c58676fff2c8150bf7e363ad5aa114a8a9c85699693c7296c04a9e8"} Jan 31 06:30:12 crc kubenswrapper[5050]: I0131 06:30:12.905583 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" podStartSLOduration=12.905565763 podStartE2EDuration="12.905565763s" podCreationTimestamp="2026-01-31 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 06:30:12.904085913 +0000 UTC m=+4137.953247509" watchObservedRunningTime="2026-01-31 06:30:12.905565763 +0000 UTC m=+4137.954727359" Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:14.903062 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b9ed42c-b571-4eec-b45d-802eaa8cf8b7","Type":"ContainerStarted","Data":"10ed46d86e7489b96b3c3781fed3ce6991965f69bda059df643b03f130156959"} Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:14.907351 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1115b898-f052-46bf-886a-489b12a35afb","Type":"ContainerStarted","Data":"49f0ba1aedaa732aafc074a3132e68728f482e2474a4ec222145edc91781f59e"} Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:14.910534 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c","Type":"ContainerStarted","Data":"d4d6bdf3d6156afbc798be2dbd4512ef85db374f0593cb4d6e368fab3ee7dbc4"} Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:14.913670 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4914b8b7-fa26-4e58-85e1-c072305954cf","Type":"ContainerStarted","Data":"1a3dd6745e0bbf0184ed628f2c852002c874ac628cdb3685158767bd4480f264"} Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:15.357865 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7bf64f7fd-jlmtk" podUID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.140:5000/v3\": read tcp 10.217.0.2:43596->10.217.0.140:5000: read: connection reset by peer" Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:15.923806 5050 generic.go:334] "Generic (PLEG): container finished" podID="f146da43-4dcb-46f5-a04b-2c5ef4b11fd8" containerID="64f0f348f165390ea1988d2662452a5ceb8269ed7be72c3aebb26815f1c246de" exitCode=0 Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:15.923884 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bf64f7fd-jlmtk" event={"ID":"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8","Type":"ContainerDied","Data":"64f0f348f165390ea1988d2662452a5ceb8269ed7be72c3aebb26815f1c246de"} Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:15.926475 5050 generic.go:334] "Generic (PLEG): container finished" podID="a0c20fe3-4d98-4c2b-8448-d0872bd6c80a" containerID="86cd18777c58676fff2c8150bf7e363ad5aa114a8a9c85699693c7296c04a9e8" exitCode=0 Jan 31 06:30:15 crc kubenswrapper[5050]: I0131 06:30:15.927626 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" event={"ID":"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a","Type":"ContainerDied","Data":"86cd18777c58676fff2c8150bf7e363ad5aa114a8a9c85699693c7296c04a9e8"} 
Jan 31 06:30:16 crc kubenswrapper[5050]: I0131 06:30:16.237370 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.298048 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.397138 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-config-volume\") pod \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.397297 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-secret-volume\") pod \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.397412 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mkpq\" (UniqueName: \"kubernetes.io/projected/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-kube-api-access-7mkpq\") pod \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\" (UID: \"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a\") " Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.397927 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0c20fe3-4d98-4c2b-8448-d0872bd6c80a" (UID: "a0c20fe3-4d98-4c2b-8448-d0872bd6c80a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.403838 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-kube-api-access-7mkpq" (OuterVolumeSpecName: "kube-api-access-7mkpq") pod "a0c20fe3-4d98-4c2b-8448-d0872bd6c80a" (UID: "a0c20fe3-4d98-4c2b-8448-d0872bd6c80a"). InnerVolumeSpecName "kube-api-access-7mkpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.406078 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0c20fe3-4d98-4c2b-8448-d0872bd6c80a" (UID: "a0c20fe3-4d98-4c2b-8448-d0872bd6c80a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.499697 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.499763 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mkpq\" (UniqueName: \"kubernetes.io/projected/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-kube-api-access-7mkpq\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.499776 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0c20fe3-4d98-4c2b-8448-d0872bd6c80a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.826723 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.978344 
5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" event={"ID":"a0c20fe3-4d98-4c2b-8448-d0872bd6c80a","Type":"ContainerDied","Data":"3ffb2af2b80a383236b38da03c558f3acf64887a0b3fd417c5477b2d850ab15d"} Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.978395 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ffb2af2b80a383236b38da03c558f3acf64887a0b3fd417c5477b2d850ab15d" Jan 31 06:30:17 crc kubenswrapper[5050]: I0131 06:30:17.978461 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497350-rljcm" Jan 31 06:30:18 crc kubenswrapper[5050]: I0131 06:30:18.016230 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f9e0474f-b8df-4860-80ad-e852d72f4071","Type":"ContainerStarted","Data":"7bc0d1ef5f4d0883ba6bcd5bf8422a811d2862e8a00efd7e57ea876247ceda07"} Jan 31 06:30:18 crc kubenswrapper[5050]: I0131 06:30:18.071320 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb"] Jan 31 06:30:18 crc kubenswrapper[5050]: I0131 06:30:18.088298 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-46hvb"] Jan 31 06:30:19 crc kubenswrapper[5050]: I0131 06:30:19.029690 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bf64f7fd-jlmtk" event={"ID":"f146da43-4dcb-46f5-a04b-2c5ef4b11fd8","Type":"ContainerStarted","Data":"d0a978402714a19c9683f6a063f2d9563b802b0ae92c4a02ba919bedce8d5c72"} Jan 31 06:30:19 crc kubenswrapper[5050]: I0131 06:30:19.030479 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 06:30:19 crc kubenswrapper[5050]: I0131 06:30:19.749847 5050 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6091ffec-2f5f-4709-9984-69e94489c3b7" path="/var/lib/kubelet/pods/6091ffec-2f5f-4709-9984-69e94489c3b7/volumes" Jan 31 06:30:20 crc kubenswrapper[5050]: I0131 06:30:20.041681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerStarted","Data":"7b7a26e927dcf275c2952fa33c7ffe3ad563801f2523766a363d2fbfee4249ea"} Jan 31 06:30:20 crc kubenswrapper[5050]: I0131 06:30:20.047029 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerStarted","Data":"2e3d4425dbdb8158587b7ab4c648093c6ee5c5dde27d652629647d02bcfa462c"} Jan 31 06:30:20 crc kubenswrapper[5050]: I0131 06:30:20.071925 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rs9rd" podStartSLOduration=4.838727336 podStartE2EDuration="2m12.071898145s" podCreationTimestamp="2026-01-31 06:28:08 +0000 UTC" firstStartedPulling="2026-01-31 06:28:11.138293999 +0000 UTC m=+4016.187455605" lastFinishedPulling="2026-01-31 06:30:18.371464818 +0000 UTC m=+4143.420626414" observedRunningTime="2026-01-31 06:30:20.062240174 +0000 UTC m=+4145.111401770" watchObservedRunningTime="2026-01-31 06:30:20.071898145 +0000 UTC m=+4145.121059741" Jan 31 06:30:20 crc kubenswrapper[5050]: I0131 06:30:20.160531 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 31 06:30:20 crc kubenswrapper[5050]: I0131 06:30:20.169666 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 31 06:30:20 crc kubenswrapper[5050]: I0131 06:30:20.207295 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 31 06:30:20 crc kubenswrapper[5050]: I0131 06:30:20.226357 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 31 06:30:21 crc kubenswrapper[5050]: I0131 06:30:21.262322 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 06:30:24 crc kubenswrapper[5050]: I0131 06:30:24.100539 5050 generic.go:334] "Generic (PLEG): container finished" podID="35f7d3c2-6102-4838-ae18-e42d9d69e172" containerID="e49517ef8b685dfac24168e3c2e00eb735a38ada7332e31674932f0598b58171" exitCode=1 Jan 31 06:30:24 crc kubenswrapper[5050]: I0131 06:30:24.100588 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"35f7d3c2-6102-4838-ae18-e42d9d69e172","Type":"ContainerDied","Data":"e49517ef8b685dfac24168e3c2e00eb735a38ada7332e31674932f0598b58171"} Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.280480 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-notification-agent" probeResult="failure" output=< Jan 31 06:30:25 crc kubenswrapper[5050]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 31 06:30:25 crc kubenswrapper[5050]: > Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.280725 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.281453 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"2a730e0fce413527feaac0526381e128cebb92f759392b8907970ba8374b0235"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.281499 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerName="ceilometer-notification-agent" containerID="cri-o://2a730e0fce413527feaac0526381e128cebb92f759392b8907970ba8374b0235" gracePeriod=30 Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.626166 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.670273 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.670370 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ssh-key\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.670477 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ca-certs\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.670528 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-workdir\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.670849 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config-secret\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.670918 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-config-data\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.670986 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-temporary\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.671085 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpjgf\" (UniqueName: \"kubernetes.io/projected/35f7d3c2-6102-4838-ae18-e42d9d69e172-kube-api-access-xpjgf\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.671176 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"35f7d3c2-6102-4838-ae18-e42d9d69e172\" (UID: \"35f7d3c2-6102-4838-ae18-e42d9d69e172\") " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.672004 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: 
"35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.672140 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-config-data" (OuterVolumeSpecName: "config-data") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.672775 5050 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.672804 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.678196 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.680131 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f7d3c2-6102-4838-ae18-e42d9d69e172-kube-api-access-xpjgf" (OuterVolumeSpecName: "kube-api-access-xpjgf") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "kube-api-access-xpjgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.689367 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.704531 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.714166 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.730202 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.744539 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "35f7d3c2-6102-4838-ae18-e42d9d69e172" (UID: "35f7d3c2-6102-4838-ae18-e42d9d69e172"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.774381 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.774419 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.774429 5050 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.774455 5050 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/35f7d3c2-6102-4838-ae18-e42d9d69e172-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.774467 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/35f7d3c2-6102-4838-ae18-e42d9d69e172-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.774477 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpjgf\" (UniqueName: \"kubernetes.io/projected/35f7d3c2-6102-4838-ae18-e42d9d69e172-kube-api-access-xpjgf\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.774507 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.794281 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 06:30:25 crc kubenswrapper[5050]: I0131 06:30:25.876572 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:26 crc kubenswrapper[5050]: I0131 06:30:26.120730 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"35f7d3c2-6102-4838-ae18-e42d9d69e172","Type":"ContainerDied","Data":"e9951cabb094d1e1b786b36baf03b32a266763012bef9983787653306f7e8deb"} Jan 31 06:30:26 crc kubenswrapper[5050]: I0131 06:30:26.121057 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9951cabb094d1e1b786b36baf03b32a266763012bef9983787653306f7e8deb" Jan 31 06:30:26 crc kubenswrapper[5050]: I0131 
06:30:26.120793 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 06:30:28 crc kubenswrapper[5050]: I0131 06:30:28.861216 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:30:28 crc kubenswrapper[5050]: I0131 06:30:28.861602 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:30:28 crc kubenswrapper[5050]: I0131 06:30:28.915099 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:30:28 crc kubenswrapper[5050]: I0131 06:30:28.976037 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 31 06:30:29 crc kubenswrapper[5050]: I0131 06:30:29.504106 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 31 06:30:29 crc kubenswrapper[5050]: I0131 06:30:29.719203 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:30:29 crc kubenswrapper[5050]: I0131 06:30:29.771877 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rs9rd"] Jan 31 06:30:30 crc kubenswrapper[5050]: I0131 06:30:30.713757 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 31 06:30:31 crc kubenswrapper[5050]: I0131 06:30:31.212664 5050 generic.go:334] "Generic (PLEG): container finished" podID="4ee96caa-81d3-4f74-80ae-2f8b57a94d96" containerID="2a730e0fce413527feaac0526381e128cebb92f759392b8907970ba8374b0235" exitCode=0 Jan 31 06:30:31 crc kubenswrapper[5050]: I0131 06:30:31.212704 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerDied","Data":"2a730e0fce413527feaac0526381e128cebb92f759392b8907970ba8374b0235"} Jan 31 06:30:31 crc kubenswrapper[5050]: I0131 06:30:31.212939 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rs9rd" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="registry-server" containerID="cri-o://7b7a26e927dcf275c2952fa33c7ffe3ad563801f2523766a363d2fbfee4249ea" gracePeriod=2 Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.559528 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 06:30:32 crc kubenswrapper[5050]: E0131 06:30:32.561447 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f7d3c2-6102-4838-ae18-e42d9d69e172" containerName="tempest-tests-tempest-tests-runner" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.561471 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f7d3c2-6102-4838-ae18-e42d9d69e172" containerName="tempest-tests-tempest-tests-runner" Jan 31 06:30:32 crc kubenswrapper[5050]: E0131 06:30:32.561510 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c20fe3-4d98-4c2b-8448-d0872bd6c80a" containerName="collect-profiles" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.561521 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c20fe3-4d98-4c2b-8448-d0872bd6c80a" containerName="collect-profiles" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.561769 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f7d3c2-6102-4838-ae18-e42d9d69e172" containerName="tempest-tests-tempest-tests-runner" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.561793 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c20fe3-4d98-4c2b-8448-d0872bd6c80a" containerName="collect-profiles" Jan 31 06:30:32 crc 
kubenswrapper[5050]: I0131 06:30:32.562615 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.565097 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g8z7s" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.573665 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.666311 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.666441 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d26t6\" (UniqueName: \"kubernetes.io/projected/0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b-kube-api-access-d26t6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.767812 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.768172 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-d26t6\" (UniqueName: \"kubernetes.io/projected/0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b-kube-api-access-d26t6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.768244 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.799249 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d26t6\" (UniqueName: \"kubernetes.io/projected/0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b-kube-api-access-d26t6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.801369 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:32 crc kubenswrapper[5050]: I0131 06:30:32.910332 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 06:30:33 crc kubenswrapper[5050]: I0131 06:30:33.241280 5050 generic.go:334] "Generic (PLEG): container finished" podID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerID="7b7a26e927dcf275c2952fa33c7ffe3ad563801f2523766a363d2fbfee4249ea" exitCode=0 Jan 31 06:30:33 crc kubenswrapper[5050]: I0131 06:30:33.241349 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerDied","Data":"7b7a26e927dcf275c2952fa33c7ffe3ad563801f2523766a363d2fbfee4249ea"} Jan 31 06:30:33 crc kubenswrapper[5050]: I0131 06:30:33.368399 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 06:30:33 crc kubenswrapper[5050]: W0131 06:30:33.371107 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d50c6e8_cdc2_4d84_ae91_a7d5d5f3290b.slice/crio-bb4b538daa4d4a14e4d059ab8bb902f04fc08d2b36b2cd28a4908d17401ff733 WatchSource:0}: Error finding container bb4b538daa4d4a14e4d059ab8bb902f04fc08d2b36b2cd28a4908d17401ff733: Status 404 returned error can't find the container with id bb4b538daa4d4a14e4d059ab8bb902f04fc08d2b36b2cd28a4908d17401ff733 Jan 31 06:30:34 crc kubenswrapper[5050]: I0131 06:30:34.251840 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b","Type":"ContainerStarted","Data":"bb4b538daa4d4a14e4d059ab8bb902f04fc08d2b36b2cd28a4908d17401ff733"} Jan 31 06:30:34 crc kubenswrapper[5050]: I0131 06:30:34.927980 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.011944 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-catalog-content\") pod \"e80febf1-fd85-4e73-baca-269c9ee21fa9\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.012051 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-utilities\") pod \"e80febf1-fd85-4e73-baca-269c9ee21fa9\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.012123 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/e80febf1-fd85-4e73-baca-269c9ee21fa9-kube-api-access-t6qmb\") pod \"e80febf1-fd85-4e73-baca-269c9ee21fa9\" (UID: \"e80febf1-fd85-4e73-baca-269c9ee21fa9\") " Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.014603 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-utilities" (OuterVolumeSpecName: "utilities") pod "e80febf1-fd85-4e73-baca-269c9ee21fa9" (UID: "e80febf1-fd85-4e73-baca-269c9ee21fa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.020494 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80febf1-fd85-4e73-baca-269c9ee21fa9-kube-api-access-t6qmb" (OuterVolumeSpecName: "kube-api-access-t6qmb") pod "e80febf1-fd85-4e73-baca-269c9ee21fa9" (UID: "e80febf1-fd85-4e73-baca-269c9ee21fa9"). InnerVolumeSpecName "kube-api-access-t6qmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.117261 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6qmb\" (UniqueName: \"kubernetes.io/projected/e80febf1-fd85-4e73-baca-269c9ee21fa9-kube-api-access-t6qmb\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.117311 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.263721 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs9rd" event={"ID":"e80febf1-fd85-4e73-baca-269c9ee21fa9","Type":"ContainerDied","Data":"5d14762567c0958edd587d8574b7f0812c78a59855bfccd81af724678c92e281"} Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.263860 5050 scope.go:117] "RemoveContainer" containerID="7b7a26e927dcf275c2952fa33c7ffe3ad563801f2523766a363d2fbfee4249ea" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.263775 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rs9rd" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.285119 5050 scope.go:117] "RemoveContainer" containerID="e5f6cc813211f71630a0ac792925fa773f1a5a5ff8afbe083452a37859923aaa" Jan 31 06:30:35 crc kubenswrapper[5050]: I0131 06:30:35.305818 5050 scope.go:117] "RemoveContainer" containerID="a12d0773437bd638188d3ab53fcb965830dceae678db4a4a8f7ff3e0065f79ea" Jan 31 06:30:37 crc kubenswrapper[5050]: I0131 06:30:37.052188 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e80febf1-fd85-4e73-baca-269c9ee21fa9" (UID: "e80febf1-fd85-4e73-baca-269c9ee21fa9"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:30:37 crc kubenswrapper[5050]: I0131 06:30:37.057348 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80febf1-fd85-4e73-baca-269c9ee21fa9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:30:37 crc kubenswrapper[5050]: I0131 06:30:37.100370 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rs9rd"] Jan 31 06:30:37 crc kubenswrapper[5050]: I0131 06:30:37.108685 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rs9rd"] Jan 31 06:30:37 crc kubenswrapper[5050]: I0131 06:30:37.748876 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" path="/var/lib/kubelet/pods/e80febf1-fd85-4e73-baca-269c9ee21fa9/volumes" Jan 31 06:30:40 crc kubenswrapper[5050]: I0131 06:30:40.325665 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ee96caa-81d3-4f74-80ae-2f8b57a94d96","Type":"ContainerStarted","Data":"dea620e5f80b49015ddd85115ef162825aa4734be08804b3057ca1dfb965443c"} Jan 31 06:30:40 crc kubenswrapper[5050]: I0131 06:30:40.731666 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bf64f7fd-jlmtk" Jan 31 06:30:43 crc kubenswrapper[5050]: I0131 06:30:43.360976 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b","Type":"ContainerStarted","Data":"2965e2ec6ea96a7592230749d8debfc85bb048f9d3364be9d2712f36b5e023d0"} Jan 31 06:30:43 crc kubenswrapper[5050]: I0131 06:30:43.379104 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podStartSLOduration=2.365191088 podStartE2EDuration="11.379087776s" podCreationTimestamp="2026-01-31 06:30:32 +0000 UTC" firstStartedPulling="2026-01-31 06:30:33.374008903 +0000 UTC m=+4158.423170499" lastFinishedPulling="2026-01-31 06:30:42.387905591 +0000 UTC m=+4167.437067187" observedRunningTime="2026-01-31 06:30:43.372203411 +0000 UTC m=+4168.421365007" watchObservedRunningTime="2026-01-31 06:30:43.379087776 +0000 UTC m=+4168.428249382" Jan 31 06:30:58 crc kubenswrapper[5050]: I0131 06:30:58.838655 5050 scope.go:117] "RemoveContainer" containerID="fa359172c8f745b63d36754c025350a62f3a52f99c271a6b8857ae5260865b7f" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.330540 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qbr8s/must-gather-wqzms"] Jan 31 06:31:35 crc kubenswrapper[5050]: E0131 06:31:35.331651 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="extract-content" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.331671 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="extract-content" Jan 31 06:31:35 crc kubenswrapper[5050]: E0131 06:31:35.331700 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="extract-utilities" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.331709 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="extract-utilities" Jan 31 06:31:35 crc kubenswrapper[5050]: E0131 06:31:35.331735 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="registry-server" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.331743 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="registry-server" Jan 31 
06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.331971 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80febf1-fd85-4e73-baca-269c9ee21fa9" containerName="registry-server" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.333183 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.336643 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qbr8s"/"kube-root-ca.crt" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.337529 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qbr8s"/"openshift-service-ca.crt" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.338192 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qbr8s"/"default-dockercfg-jp8z2" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.339815 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qbr8s/must-gather-wqzms"] Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.524185 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8f15e4-442f-4a10-92fa-c0095b4ca407-must-gather-output\") pod \"must-gather-wqzms\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.524414 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dq2\" (UniqueName: \"kubernetes.io/projected/4e8f15e4-442f-4a10-92fa-c0095b4ca407-kube-api-access-n7dq2\") pod \"must-gather-wqzms\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 
06:31:35.625818 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8f15e4-442f-4a10-92fa-c0095b4ca407-must-gather-output\") pod \"must-gather-wqzms\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.626022 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dq2\" (UniqueName: \"kubernetes.io/projected/4e8f15e4-442f-4a10-92fa-c0095b4ca407-kube-api-access-n7dq2\") pod \"must-gather-wqzms\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.626280 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8f15e4-442f-4a10-92fa-c0095b4ca407-must-gather-output\") pod \"must-gather-wqzms\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.867757 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dq2\" (UniqueName: \"kubernetes.io/projected/4e8f15e4-442f-4a10-92fa-c0095b4ca407-kube-api-access-n7dq2\") pod \"must-gather-wqzms\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:35 crc kubenswrapper[5050]: I0131 06:31:35.952974 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:31:36 crc kubenswrapper[5050]: I0131 06:31:36.424781 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qbr8s/must-gather-wqzms"] Jan 31 06:31:36 crc kubenswrapper[5050]: I0131 06:31:36.861514 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/must-gather-wqzms" event={"ID":"4e8f15e4-442f-4a10-92fa-c0095b4ca407","Type":"ContainerStarted","Data":"2bb2177c67ef16cf3bb7f9a9373a8095e6cf5b54d9689e0f79f43d46ae24589d"} Jan 31 06:31:43 crc kubenswrapper[5050]: I0131 06:31:43.938061 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/must-gather-wqzms" event={"ID":"4e8f15e4-442f-4a10-92fa-c0095b4ca407","Type":"ContainerStarted","Data":"2ff71f521ccc3fa491a457525b0c747c39fdc6c1a6a81c6dfb54fc3f74169b38"} Jan 31 06:31:43 crc kubenswrapper[5050]: I0131 06:31:43.938647 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/must-gather-wqzms" event={"ID":"4e8f15e4-442f-4a10-92fa-c0095b4ca407","Type":"ContainerStarted","Data":"d8ab1a78fe4b494ef6d0c074511459a77b59dc2f123902b2f92bfbeb7b3c93d9"} Jan 31 06:31:43 crc kubenswrapper[5050]: I0131 06:31:43.956459 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qbr8s/must-gather-wqzms" podStartSLOduration=2.675866553 podStartE2EDuration="8.956434464s" podCreationTimestamp="2026-01-31 06:31:35 +0000 UTC" firstStartedPulling="2026-01-31 06:31:36.429533175 +0000 UTC m=+4221.478694771" lastFinishedPulling="2026-01-31 06:31:42.710101086 +0000 UTC m=+4227.759262682" observedRunningTime="2026-01-31 06:31:43.95147714 +0000 UTC m=+4229.000638736" watchObservedRunningTime="2026-01-31 06:31:43.956434464 +0000 UTC m=+4229.005596060" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.524801 5050 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-qbr8s/crc-debug-j9xgp"] Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.526703 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.636693 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvjw7\" (UniqueName: \"kubernetes.io/projected/24d8a0b0-59b1-4f7b-b722-432fd45f8702-kube-api-access-pvjw7\") pod \"crc-debug-j9xgp\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.637354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24d8a0b0-59b1-4f7b-b722-432fd45f8702-host\") pod \"crc-debug-j9xgp\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.739429 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvjw7\" (UniqueName: \"kubernetes.io/projected/24d8a0b0-59b1-4f7b-b722-432fd45f8702-kube-api-access-pvjw7\") pod \"crc-debug-j9xgp\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.739493 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24d8a0b0-59b1-4f7b-b722-432fd45f8702-host\") pod \"crc-debug-j9xgp\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.739606 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/24d8a0b0-59b1-4f7b-b722-432fd45f8702-host\") pod \"crc-debug-j9xgp\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.757537 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvjw7\" (UniqueName: \"kubernetes.io/projected/24d8a0b0-59b1-4f7b-b722-432fd45f8702-kube-api-access-pvjw7\") pod \"crc-debug-j9xgp\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.878406 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:31:48 crc kubenswrapper[5050]: W0131 06:31:48.922636 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d8a0b0_59b1_4f7b_b722_432fd45f8702.slice/crio-7bb302e08f8de21dc2e745524ad6a48f642a68eebf8831bcdd80c3ae7d4ec379 WatchSource:0}: Error finding container 7bb302e08f8de21dc2e745524ad6a48f642a68eebf8831bcdd80c3ae7d4ec379: Status 404 returned error can't find the container with id 7bb302e08f8de21dc2e745524ad6a48f642a68eebf8831bcdd80c3ae7d4ec379 Jan 31 06:31:48 crc kubenswrapper[5050]: I0131 06:31:48.986593 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" event={"ID":"24d8a0b0-59b1-4f7b-b722-432fd45f8702","Type":"ContainerStarted","Data":"7bb302e08f8de21dc2e745524ad6a48f642a68eebf8831bcdd80c3ae7d4ec379"} Jan 31 06:32:01 crc kubenswrapper[5050]: I0131 06:32:01.090338 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" event={"ID":"24d8a0b0-59b1-4f7b-b722-432fd45f8702","Type":"ContainerStarted","Data":"7e9b5d983a252475fa070d73cfe6a7ccb66b4f3df586bdea6666e078bb3c4233"} Jan 31 06:32:01 crc kubenswrapper[5050]: I0131 
06:32:01.110978 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" podStartSLOduration=2.185904676 podStartE2EDuration="13.110947473s" podCreationTimestamp="2026-01-31 06:31:48 +0000 UTC" firstStartedPulling="2026-01-31 06:31:48.925382005 +0000 UTC m=+4233.974543601" lastFinishedPulling="2026-01-31 06:31:59.850424802 +0000 UTC m=+4244.899586398" observedRunningTime="2026-01-31 06:32:01.110774438 +0000 UTC m=+4246.159936034" watchObservedRunningTime="2026-01-31 06:32:01.110947473 +0000 UTC m=+4246.160109069" Jan 31 06:32:09 crc kubenswrapper[5050]: I0131 06:32:09.018641 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:32:09 crc kubenswrapper[5050]: I0131 06:32:09.019075 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:32:39 crc kubenswrapper[5050]: I0131 06:32:39.018103 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:32:39 crc kubenswrapper[5050]: I0131 06:32:39.018686 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.218712 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4wvmk"] Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.221210 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.234567 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4wvmk"] Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.275161 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvb5n\" (UniqueName: \"kubernetes.io/projected/feacdde8-2174-4336-869c-8c7b6c7b3542-kube-api-access-cvb5n\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.275230 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-utilities\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.275302 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-catalog-content\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.376870 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-catalog-content\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.377040 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvb5n\" (UniqueName: \"kubernetes.io/projected/feacdde8-2174-4336-869c-8c7b6c7b3542-kube-api-access-cvb5n\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.377090 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-utilities\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.377592 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-catalog-content\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.377612 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-utilities\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.397062 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvb5n\" (UniqueName: 
\"kubernetes.io/projected/feacdde8-2174-4336-869c-8c7b6c7b3542-kube-api-access-cvb5n\") pod \"redhat-marketplace-4wvmk\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") " pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:46 crc kubenswrapper[5050]: I0131 06:32:46.543083 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4wvmk" Jan 31 06:32:47 crc kubenswrapper[5050]: I0131 06:32:47.059640 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4wvmk"] Jan 31 06:32:47 crc kubenswrapper[5050]: I0131 06:32:47.556254 5050 generic.go:334] "Generic (PLEG): container finished" podID="24d8a0b0-59b1-4f7b-b722-432fd45f8702" containerID="7e9b5d983a252475fa070d73cfe6a7ccb66b4f3df586bdea6666e078bb3c4233" exitCode=0 Jan 31 06:32:47 crc kubenswrapper[5050]: I0131 06:32:47.556357 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" event={"ID":"24d8a0b0-59b1-4f7b-b722-432fd45f8702","Type":"ContainerDied","Data":"7e9b5d983a252475fa070d73cfe6a7ccb66b4f3df586bdea6666e078bb3c4233"} Jan 31 06:32:47 crc kubenswrapper[5050]: I0131 06:32:47.558390 5050 generic.go:334] "Generic (PLEG): container finished" podID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerID="3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279" exitCode=0 Jan 31 06:32:47 crc kubenswrapper[5050]: I0131 06:32:47.558421 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wvmk" event={"ID":"feacdde8-2174-4336-869c-8c7b6c7b3542","Type":"ContainerDied","Data":"3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279"} Jan 31 06:32:47 crc kubenswrapper[5050]: I0131 06:32:47.558451 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wvmk" 
event={"ID":"feacdde8-2174-4336-869c-8c7b6c7b3542","Type":"ContainerStarted","Data":"a3fb5781ddbe7627a23199ee788d0dcbc8e6cf350fe9319860c95b8b27b8e652"} Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.669803 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.706318 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qbr8s/crc-debug-j9xgp"] Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.715492 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qbr8s/crc-debug-j9xgp"] Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.727116 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24d8a0b0-59b1-4f7b-b722-432fd45f8702-host\") pod \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.727184 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24d8a0b0-59b1-4f7b-b722-432fd45f8702-host" (OuterVolumeSpecName: "host") pod "24d8a0b0-59b1-4f7b-b722-432fd45f8702" (UID: "24d8a0b0-59b1-4f7b-b722-432fd45f8702"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.727200 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvjw7\" (UniqueName: \"kubernetes.io/projected/24d8a0b0-59b1-4f7b-b722-432fd45f8702-kube-api-access-pvjw7\") pod \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\" (UID: \"24d8a0b0-59b1-4f7b-b722-432fd45f8702\") " Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.728103 5050 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/24d8a0b0-59b1-4f7b-b722-432fd45f8702-host\") on node \"crc\" DevicePath \"\"" Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.738285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d8a0b0-59b1-4f7b-b722-432fd45f8702-kube-api-access-pvjw7" (OuterVolumeSpecName: "kube-api-access-pvjw7") pod "24d8a0b0-59b1-4f7b-b722-432fd45f8702" (UID: "24d8a0b0-59b1-4f7b-b722-432fd45f8702"). InnerVolumeSpecName "kube-api-access-pvjw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:32:48 crc kubenswrapper[5050]: I0131 06:32:48.830879 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvjw7\" (UniqueName: \"kubernetes.io/projected/24d8a0b0-59b1-4f7b-b722-432fd45f8702-kube-api-access-pvjw7\") on node \"crc\" DevicePath \"\"" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.577850 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb302e08f8de21dc2e745524ad6a48f642a68eebf8831bcdd80c3ae7d4ec379" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.577938 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-j9xgp" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.759843 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d8a0b0-59b1-4f7b-b722-432fd45f8702" path="/var/lib/kubelet/pods/24d8a0b0-59b1-4f7b-b722-432fd45f8702/volumes" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.913215 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qbr8s/crc-debug-8q9xd"] Jan 31 06:32:49 crc kubenswrapper[5050]: E0131 06:32:49.913684 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d8a0b0-59b1-4f7b-b722-432fd45f8702" containerName="container-00" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.913765 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d8a0b0-59b1-4f7b-b722-432fd45f8702" containerName="container-00" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.914067 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d8a0b0-59b1-4f7b-b722-432fd45f8702" containerName="container-00" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.914855 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.958169 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzrl\" (UniqueName: \"kubernetes.io/projected/483a26f4-cae7-447c-9104-d4f201c575d0-kube-api-access-6fzrl\") pod \"crc-debug-8q9xd\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:49 crc kubenswrapper[5050]: I0131 06:32:49.958220 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/483a26f4-cae7-447c-9104-d4f201c575d0-host\") pod \"crc-debug-8q9xd\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:50 crc kubenswrapper[5050]: I0131 06:32:50.059644 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzrl\" (UniqueName: \"kubernetes.io/projected/483a26f4-cae7-447c-9104-d4f201c575d0-kube-api-access-6fzrl\") pod \"crc-debug-8q9xd\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:50 crc kubenswrapper[5050]: I0131 06:32:50.059681 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/483a26f4-cae7-447c-9104-d4f201c575d0-host\") pod \"crc-debug-8q9xd\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:50 crc kubenswrapper[5050]: I0131 06:32:50.059934 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/483a26f4-cae7-447c-9104-d4f201c575d0-host\") pod \"crc-debug-8q9xd\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:50 crc 
kubenswrapper[5050]: I0131 06:32:50.077557 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzrl\" (UniqueName: \"kubernetes.io/projected/483a26f4-cae7-447c-9104-d4f201c575d0-kube-api-access-6fzrl\") pod \"crc-debug-8q9xd\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:50 crc kubenswrapper[5050]: I0131 06:32:50.262150 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:50 crc kubenswrapper[5050]: W0131 06:32:50.307867 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod483a26f4_cae7_447c_9104_d4f201c575d0.slice/crio-ebb39e154d0517adfc910278d484b5d35fe235df8507320910691eff7b172280 WatchSource:0}: Error finding container ebb39e154d0517adfc910278d484b5d35fe235df8507320910691eff7b172280: Status 404 returned error can't find the container with id ebb39e154d0517adfc910278d484b5d35fe235df8507320910691eff7b172280 Jan 31 06:32:50 crc kubenswrapper[5050]: I0131 06:32:50.594184 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" event={"ID":"483a26f4-cae7-447c-9104-d4f201c575d0","Type":"ContainerStarted","Data":"ebb39e154d0517adfc910278d484b5d35fe235df8507320910691eff7b172280"} Jan 31 06:32:50 crc kubenswrapper[5050]: I0131 06:32:50.597832 5050 generic.go:334] "Generic (PLEG): container finished" podID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerID="7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586" exitCode=0 Jan 31 06:32:50 crc kubenswrapper[5050]: I0131 06:32:50.597911 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wvmk" event={"ID":"feacdde8-2174-4336-869c-8c7b6c7b3542","Type":"ContainerDied","Data":"7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586"} Jan 31 
06:32:51 crc kubenswrapper[5050]: I0131 06:32:51.608143 5050 generic.go:334] "Generic (PLEG): container finished" podID="483a26f4-cae7-447c-9104-d4f201c575d0" containerID="6289fe6203ff0b0d039e10f975916a4d7a71fa5a57ae040ebf11684d5b7fb0be" exitCode=1 Jan 31 06:32:51 crc kubenswrapper[5050]: I0131 06:32:51.608232 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" event={"ID":"483a26f4-cae7-447c-9104-d4f201c575d0","Type":"ContainerDied","Data":"6289fe6203ff0b0d039e10f975916a4d7a71fa5a57ae040ebf11684d5b7fb0be"} Jan 31 06:32:51 crc kubenswrapper[5050]: I0131 06:32:51.610870 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wvmk" event={"ID":"feacdde8-2174-4336-869c-8c7b6c7b3542","Type":"ContainerStarted","Data":"419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24"} Jan 31 06:32:51 crc kubenswrapper[5050]: I0131 06:32:51.670470 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4wvmk" podStartSLOduration=2.042595767 podStartE2EDuration="5.670449793s" podCreationTimestamp="2026-01-31 06:32:46 +0000 UTC" firstStartedPulling="2026-01-31 06:32:47.560570335 +0000 UTC m=+4292.609731931" lastFinishedPulling="2026-01-31 06:32:51.188424361 +0000 UTC m=+4296.237585957" observedRunningTime="2026-01-31 06:32:51.657334219 +0000 UTC m=+4296.706495815" watchObservedRunningTime="2026-01-31 06:32:51.670449793 +0000 UTC m=+4296.719611389" Jan 31 06:32:51 crc kubenswrapper[5050]: I0131 06:32:51.703619 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qbr8s/crc-debug-8q9xd"] Jan 31 06:32:51 crc kubenswrapper[5050]: I0131 06:32:51.720347 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qbr8s/crc-debug-8q9xd"] Jan 31 06:32:52 crc kubenswrapper[5050]: I0131 06:32:52.758317 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-8q9xd" Jan 31 06:32:52 crc kubenswrapper[5050]: I0131 06:32:52.815524 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/483a26f4-cae7-447c-9104-d4f201c575d0-host\") pod \"483a26f4-cae7-447c-9104-d4f201c575d0\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " Jan 31 06:32:52 crc kubenswrapper[5050]: I0131 06:32:52.815715 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483a26f4-cae7-447c-9104-d4f201c575d0-host" (OuterVolumeSpecName: "host") pod "483a26f4-cae7-447c-9104-d4f201c575d0" (UID: "483a26f4-cae7-447c-9104-d4f201c575d0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 06:32:52 crc kubenswrapper[5050]: I0131 06:32:52.816060 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzrl\" (UniqueName: \"kubernetes.io/projected/483a26f4-cae7-447c-9104-d4f201c575d0-kube-api-access-6fzrl\") pod \"483a26f4-cae7-447c-9104-d4f201c575d0\" (UID: \"483a26f4-cae7-447c-9104-d4f201c575d0\") " Jan 31 06:32:52 crc kubenswrapper[5050]: I0131 06:32:52.817627 5050 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/483a26f4-cae7-447c-9104-d4f201c575d0-host\") on node \"crc\" DevicePath \"\"" Jan 31 06:32:52 crc kubenswrapper[5050]: I0131 06:32:52.832266 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483a26f4-cae7-447c-9104-d4f201c575d0-kube-api-access-6fzrl" (OuterVolumeSpecName: "kube-api-access-6fzrl") pod "483a26f4-cae7-447c-9104-d4f201c575d0" (UID: "483a26f4-cae7-447c-9104-d4f201c575d0"). InnerVolumeSpecName "kube-api-access-6fzrl". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 06:32:52 crc kubenswrapper[5050]: I0131 06:32:52.920073 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzrl\" (UniqueName: \"kubernetes.io/projected/483a26f4-cae7-447c-9104-d4f201c575d0-kube-api-access-6fzrl\") on node \"crc\" DevicePath \"\""
Jan 31 06:32:53 crc kubenswrapper[5050]: I0131 06:32:53.629550 5050 scope.go:117] "RemoveContainer" containerID="6289fe6203ff0b0d039e10f975916a4d7a71fa5a57ae040ebf11684d5b7fb0be"
Jan 31 06:32:53 crc kubenswrapper[5050]: I0131 06:32:53.629692 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/crc-debug-8q9xd"
Jan 31 06:32:53 crc kubenswrapper[5050]: I0131 06:32:53.750940 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483a26f4-cae7-447c-9104-d4f201c575d0" path="/var/lib/kubelet/pods/483a26f4-cae7-447c-9104-d4f201c575d0/volumes"
Jan 31 06:32:56 crc kubenswrapper[5050]: I0131 06:32:56.543244 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4wvmk"
Jan 31 06:32:56 crc kubenswrapper[5050]: I0131 06:32:56.543690 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4wvmk"
Jan 31 06:32:56 crc kubenswrapper[5050]: I0131 06:32:56.594795 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4wvmk"
Jan 31 06:32:56 crc kubenswrapper[5050]: I0131 06:32:56.721626 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4wvmk"
Jan 31 06:32:56 crc kubenswrapper[5050]: I0131 06:32:56.837089 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4wvmk"]
Jan 31 06:32:58 crc kubenswrapper[5050]: I0131 06:32:58.728907 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4wvmk" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="registry-server" containerID="cri-o://419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24" gracePeriod=2
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.736439 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4wvmk"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.744146 5050 generic.go:334] "Generic (PLEG): container finished" podID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerID="419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24" exitCode=0
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.744248 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4wvmk"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.749239 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wvmk" event={"ID":"feacdde8-2174-4336-869c-8c7b6c7b3542","Type":"ContainerDied","Data":"419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24"}
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.749310 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wvmk" event={"ID":"feacdde8-2174-4336-869c-8c7b6c7b3542","Type":"ContainerDied","Data":"a3fb5781ddbe7627a23199ee788d0dcbc8e6cf350fe9319860c95b8b27b8e652"}
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.749340 5050 scope.go:117] "RemoveContainer" containerID="419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.795970 5050 scope.go:117] "RemoveContainer" containerID="7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.820837 5050 scope.go:117] "RemoveContainer" containerID="3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.867887 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-utilities\") pod \"feacdde8-2174-4336-869c-8c7b6c7b3542\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") "
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.868123 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-catalog-content\") pod \"feacdde8-2174-4336-869c-8c7b6c7b3542\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") "
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.868229 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvb5n\" (UniqueName: \"kubernetes.io/projected/feacdde8-2174-4336-869c-8c7b6c7b3542-kube-api-access-cvb5n\") pod \"feacdde8-2174-4336-869c-8c7b6c7b3542\" (UID: \"feacdde8-2174-4336-869c-8c7b6c7b3542\") "
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.868663 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-utilities" (OuterVolumeSpecName: "utilities") pod "feacdde8-2174-4336-869c-8c7b6c7b3542" (UID: "feacdde8-2174-4336-869c-8c7b6c7b3542"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.868943 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.869935 5050 scope.go:117] "RemoveContainer" containerID="419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24"
Jan 31 06:32:59 crc kubenswrapper[5050]: E0131 06:32:59.870446 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24\": container with ID starting with 419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24 not found: ID does not exist" containerID="419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.870479 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24"} err="failed to get container status \"419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24\": rpc error: code = NotFound desc = could not find container \"419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24\": container with ID starting with 419500ecbd364ad917701fe49909d50dca26640c541b380905269642b7785e24 not found: ID does not exist"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.870506 5050 scope.go:117] "RemoveContainer" containerID="7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586"
Jan 31 06:32:59 crc kubenswrapper[5050]: E0131 06:32:59.870838 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586\": container with ID starting with 7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586 not found: ID does not exist" containerID="7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.870861 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586"} err="failed to get container status \"7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586\": rpc error: code = NotFound desc = could not find container \"7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586\": container with ID starting with 7bdcf36d607530588f729e410b1d837c90a6457c5e4c9734e22929d22cf6a586 not found: ID does not exist"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.870874 5050 scope.go:117] "RemoveContainer" containerID="3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279"
Jan 31 06:32:59 crc kubenswrapper[5050]: E0131 06:32:59.871311 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279\": container with ID starting with 3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279 not found: ID does not exist" containerID="3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.871337 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279"} err="failed to get container status \"3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279\": rpc error: code = NotFound desc = could not find container \"3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279\": container with ID starting with 3d2610f7aee5f54c494f280e4e46edac4db23d8148cf92b6171350795e4d0279 not found: ID does not exist"
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.874761 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feacdde8-2174-4336-869c-8c7b6c7b3542-kube-api-access-cvb5n" (OuterVolumeSpecName: "kube-api-access-cvb5n") pod "feacdde8-2174-4336-869c-8c7b6c7b3542" (UID: "feacdde8-2174-4336-869c-8c7b6c7b3542"). InnerVolumeSpecName "kube-api-access-cvb5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.904224 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feacdde8-2174-4336-869c-8c7b6c7b3542" (UID: "feacdde8-2174-4336-869c-8c7b6c7b3542"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.971582 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvb5n\" (UniqueName: \"kubernetes.io/projected/feacdde8-2174-4336-869c-8c7b6c7b3542-kube-api-access-cvb5n\") on node \"crc\" DevicePath \"\""
Jan 31 06:32:59 crc kubenswrapper[5050]: I0131 06:32:59.971647 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feacdde8-2174-4336-869c-8c7b6c7b3542-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 06:33:00 crc kubenswrapper[5050]: I0131 06:33:00.093503 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4wvmk"]
Jan 31 06:33:00 crc kubenswrapper[5050]: I0131 06:33:00.101709 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4wvmk"]
Jan 31 06:33:01 crc kubenswrapper[5050]: I0131 06:33:01.748317 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" path="/var/lib/kubelet/pods/feacdde8-2174-4336-869c-8c7b6c7b3542/volumes"
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.018049 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.018697 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.018754 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62"
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.019567 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5befb432b2b57e36a9e4ab8b626a53f7a42f79cf6fc4e5b91dc28b5896bd449"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.019623 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://b5befb432b2b57e36a9e4ab8b626a53f7a42f79cf6fc4e5b91dc28b5896bd449" gracePeriod=600
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.843155 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="b5befb432b2b57e36a9e4ab8b626a53f7a42f79cf6fc4e5b91dc28b5896bd449" exitCode=0
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.843228 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"b5befb432b2b57e36a9e4ab8b626a53f7a42f79cf6fc4e5b91dc28b5896bd449"}
Jan 31 06:33:09 crc kubenswrapper[5050]: I0131 06:33:09.843575 5050 scope.go:117] "RemoveContainer" containerID="c3c1c65fd5c799b472571560a40421b166d3af7b41c0dad4ae97c13d81122b7a"
Jan 31 06:33:10 crc kubenswrapper[5050]: I0131 06:33:10.854091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869"}
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.345771 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7995457cdd-4p7kh_844449d4-4111-40ad-9d23-dd9709c1a947/barbican-api/0.log"
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.439785 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7995457cdd-4p7kh_844449d4-4111-40ad-9d23-dd9709c1a947/barbican-api-log/0.log"
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.530933 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c797994c8-m9z4k_8d8a8a39-709e-45ee-8694-2e648feebbae/barbican-keystone-listener/0.log"
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.615824 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c797994c8-m9z4k_8d8a8a39-709e-45ee-8694-2e648feebbae/barbican-keystone-listener-log/0.log"
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.684118 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d7f4bb587-ddb7l_4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b/barbican-worker/0.log"
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.724942 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d7f4bb587-ddb7l_4b0dfbd1-ac80-477b-8dd6-b283bd4e2a6b/barbican-worker-log/0.log"
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.903204 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-m2976_ac858115-58fc-4cae-be54-8fc858f07268/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:49 crc kubenswrapper[5050]: I0131 06:33:49.942325 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4ee96caa-81d3-4f74-80ae-2f8b57a94d96/ceilometer-central-agent/1.log"
Jan 31 06:33:50 crc kubenswrapper[5050]: I0131 06:33:50.122386 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4ee96caa-81d3-4f74-80ae-2f8b57a94d96/ceilometer-central-agent/0.log"
Jan 31 06:33:50 crc kubenswrapper[5050]: I0131 06:33:50.123613 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4ee96caa-81d3-4f74-80ae-2f8b57a94d96/ceilometer-notification-agent/1.log"
Jan 31 06:33:50 crc kubenswrapper[5050]: I0131 06:33:50.174870 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4ee96caa-81d3-4f74-80ae-2f8b57a94d96/ceilometer-notification-agent/0.log"
Jan 31 06:33:50 crc kubenswrapper[5050]: I0131 06:33:50.177787 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4ee96caa-81d3-4f74-80ae-2f8b57a94d96/proxy-httpd/0.log"
Jan 31 06:33:50 crc kubenswrapper[5050]: I0131 06:33:50.336201 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4ee96caa-81d3-4f74-80ae-2f8b57a94d96/sg-core/0.log"
Jan 31 06:33:50 crc kubenswrapper[5050]: I0131 06:33:50.364417 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-v6pp4_7961ee40-e6b2-4cf2-9145-ac2b7fdcc4a3/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.085130 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5086536b-eaed-44e4-8951-ee45e91f09e4/cinder-api-log/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.094776 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-q9thm_55e315ea-e973-46ef-bf01-df247abf5353/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.157648 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5086536b-eaed-44e4-8951-ee45e91f09e4/cinder-api/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.408046 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4914b8b7-fa26-4e58-85e1-c072305954cf/probe/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.424974 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4914b8b7-fa26-4e58-85e1-c072305954cf/cinder-backup/1.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.476738 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4914b8b7-fa26-4e58-85e1-c072305954cf/cinder-backup/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.626393 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7b9ed42c-b571-4eec-b45d-802eaa8cf8b7/cinder-scheduler/1.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.666602 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7b9ed42c-b571-4eec-b45d-802eaa8cf8b7/cinder-scheduler/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.769607 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7b9ed42c-b571-4eec-b45d-802eaa8cf8b7/probe/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.859103 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1115b898-f052-46bf-886a-489b12a35afb/cinder-volume/1.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.921108 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1115b898-f052-46bf-886a-489b12a35afb/cinder-volume/0.log"
Jan 31 06:33:51 crc kubenswrapper[5050]: I0131 06:33:51.973241 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1115b898-f052-46bf-886a-489b12a35afb/probe/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.059070 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fs9gl_4d011c07-7fee-4c90-a77a-387c778675c1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.396663 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-twh2n_2b497e20-3e4d-4df3-9194-f922711eb66c/init/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.437921 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-nbsdr_483749fc-4acc-4fdf-94b0-359fb3d7a82e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.485997 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-twh2n_2b497e20-3e4d-4df3-9194-f922711eb66c/init/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.511290 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-twh2n_2b497e20-3e4d-4df3-9194-f922711eb66c/dnsmasq-dns/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.653033 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fa542e94-2400-4e6d-9576-687a18529d96/glance-httpd/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.678091 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fa542e94-2400-4e6d-9576-687a18529d96/glance-log/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.844665 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2dd4adbc-b40c-4d55-8f48-b98cefb276dc/glance-httpd/0.log"
Jan 31 06:33:52 crc kubenswrapper[5050]: I0131 06:33:52.881988 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2dd4adbc-b40c-4d55-8f48-b98cefb276dc/glance-log/0.log"
Jan 31 06:33:53 crc kubenswrapper[5050]: I0131 06:33:53.385097 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-th4qx_af62f2ea-1f56-4d1f-91ce-06b83ca439e6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:53 crc kubenswrapper[5050]: I0131 06:33:53.407134 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2cttr_095509b4-0f95-44ce-aa1d-9ad98503fbac/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:53 crc kubenswrapper[5050]: I0131 06:33:53.464582 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86b8468d8-lbt9b_5ab353c6-0ce1-463c-b17c-2346de6787db/horizon/0.log"
Jan 31 06:33:53 crc kubenswrapper[5050]: I0131 06:33:53.572620 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86b8468d8-lbt9b_5ab353c6-0ce1-463c-b17c-2346de6787db/horizon-log/0.log"
Jan 31 06:33:53 crc kubenswrapper[5050]: I0131 06:33:53.613775 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bf64f7fd-jlmtk_f146da43-4dcb-46f5-a04b-2c5ef4b11fd8/keystone-api/1.log"
Jan 31 06:33:53 crc kubenswrapper[5050]: I0131 06:33:53.836980 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_03c25d40-feaf-4c93-b249-64fe546d1e05/kube-state-metrics/0.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.072730 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497321-ntj5n_ab1681a7-2cdf-4cfe-a909-91b36ff079aa/keystone-cron/0.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.291611 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2b04e497-f938-4b7b-acbc-372819f1b1db/manila-api-log/0.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.338253 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bf64f7fd-jlmtk_f146da43-4dcb-46f5-a04b-2c5ef4b11fd8/keystone-api/0.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.455602 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mgqd5_8d19ba99-9dc4-4ab7-ad29-4b11b0a32b58/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.505612 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2b04e497-f938-4b7b-acbc-372819f1b1db/manila-api/0.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.595165 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c/manila-scheduler/1.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.604313 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c/probe/0.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.766748 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f9e0474f-b8df-4860-80ad-e852d72f4071/manila-share/1.log"
Jan 31 06:33:54 crc kubenswrapper[5050]: I0131 06:33:54.793344 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f9e0474f-b8df-4860-80ad-e852d72f4071/probe/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.073872 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f9e0474f-b8df-4860-80ad-e852d72f4071/manila-share/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.127680 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ed123ad3-8f6b-4cbe-bf95-d42e7551dd8c/manila-scheduler/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.151031 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86dbc7dc8f-2zfkt_da670c32-ca2c-438a-a05a-bc6e23779a60/neutron-api/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.227010 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86dbc7dc8f-2zfkt_da670c32-ca2c-438a-a05a-bc6e23779a60/neutron-httpd/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.377731 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xh977_e08dc69a-2a62-4fdd-878d-88468fec4ef0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.811665 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_334fee19-d725-4b1f-85f2-03d26fa6e09e/nova-api-log/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.836873 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e5e9f0f0-1757-4e0c-b6c1-289c93df190b/nova-cell0-conductor-conductor/0.log"
Jan 31 06:33:55 crc kubenswrapper[5050]: I0131 06:33:55.992115 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_334fee19-d725-4b1f-85f2-03d26fa6e09e/nova-api-api/0.log"
Jan 31 06:33:56 crc kubenswrapper[5050]: I0131 06:33:56.168798 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d059773b-c9a5-47db-aade-0f635664fe08/nova-cell1-conductor-conductor/0.log"
Jan 31 06:33:56 crc kubenswrapper[5050]: I0131 06:33:56.195328 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6e64bf1b-ff3c-4c6f-baa6-9737fd893d5f/nova-cell1-novncproxy-novncproxy/0.log"
Jan 31 06:33:56 crc kubenswrapper[5050]: I0131 06:33:56.382945 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4kxsw_b5ede333-cbdc-4c95-ac45-0ea62a8876f0/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:56 crc kubenswrapper[5050]: I0131 06:33:56.501234 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_70ac4fa7-405d-4fc5-b6eb-46774c40cbec/nova-metadata-log/0.log"
Jan 31 06:33:56 crc kubenswrapper[5050]: I0131 06:33:56.771262 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a27e3698-4326-47f3-bda5-3f3d44d551a9/nova-scheduler-scheduler/0.log"
Jan 31 06:33:56 crc kubenswrapper[5050]: I0131 06:33:56.869995 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d6595e6-419a-4ade-8070-99a41d9c8204/mysql-bootstrap/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.055990 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d6595e6-419a-4ade-8070-99a41d9c8204/galera/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.120945 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d6595e6-419a-4ade-8070-99a41d9c8204/mysql-bootstrap/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.267415 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1/mysql-bootstrap/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.473751 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1/mysql-bootstrap/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.573930 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6e6c6f49-ca24-4f12-b7c1-32b33a5de8c1/galera/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.758874 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_58791cf1-4858-4849-9ada-2a41e6df553e/openstackclient/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.817801 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-grlfx_5eca93ff-3985-4e89-9254-a5d2a94793d6/ovn-controller/0.log"
Jan 31 06:33:57 crc kubenswrapper[5050]: I0131 06:33:57.894505 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_70ac4fa7-405d-4fc5-b6eb-46774c40cbec/nova-metadata-metadata/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.015170 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7ddmz_82b2b313-a37f-4405-a49a-456f3c88ceb3/openstack-network-exporter/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.099819 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p2rnn_23898a5e-f7c6-473b-a882-c91ed8ff2e06/ovsdb-server-init/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.312140 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p2rnn_23898a5e-f7c6-473b-a882-c91ed8ff2e06/ovsdb-server-init/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.374036 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p2rnn_23898a5e-f7c6-473b-a882-c91ed8ff2e06/ovsdb-server/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.381824 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p2rnn_23898a5e-f7c6-473b-a882-c91ed8ff2e06/ovs-vswitchd/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.584870 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6cs7v_6fcd0150-c73a-45de-ab72-f6e05ff00b42/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.607018 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b719fd5c-6f02-4b14-9807-8752304791e4/ovn-northd/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.617624 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b719fd5c-6f02-4b14-9807-8752304791e4/openstack-network-exporter/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.792322 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_44932166-fbc5-41a4-bdf6-a3931dcbe9f0/openstack-network-exporter/0.log"
Jan 31 06:33:58 crc kubenswrapper[5050]: I0131 06:33:58.874008 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_44932166-fbc5-41a4-bdf6-a3931dcbe9f0/ovsdbserver-nb/0.log"
Jan 31 06:33:59 crc kubenswrapper[5050]: I0131 06:33:59.067527 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c3ec6f4-fbc1-40cd-bbcc-a3910770af49/openstack-network-exporter/0.log"
Jan 31 06:33:59 crc kubenswrapper[5050]: I0131 06:33:59.093501 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c3ec6f4-fbc1-40cd-bbcc-a3910770af49/ovsdbserver-sb/0.log"
Jan 31 06:33:59 crc kubenswrapper[5050]: I0131 06:33:59.259411 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d9964f68-4b9hp_9b6bfc63-e5ee-4960-8b68-bf9be807990c/placement-api/0.log"
Jan 31 06:33:59 crc kubenswrapper[5050]: I0131 06:33:59.999157 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8afb23c2-9926-4b29-b474-ba4f89f261aa/setup-container/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.002385 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d9964f68-4b9hp_9b6bfc63-e5ee-4960-8b68-bf9be807990c/placement-log/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.223882 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8afb23c2-9926-4b29-b474-ba4f89f261aa/setup-container/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.239761 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8afb23c2-9926-4b29-b474-ba4f89f261aa/rabbitmq/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.311876 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2ec9e71b-ac09-44f7-8e06-6b628508c7ad/setup-container/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.572198 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xs5xc_908ba466-3385-45bb-8c51-22e8142da678/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.591916 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2ec9e71b-ac09-44f7-8e06-6b628508c7ad/setup-container/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.616992 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2ec9e71b-ac09-44f7-8e06-6b628508c7ad/rabbitmq/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.839010 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vph6c_8eabec3b-eead-4a45-9836-2a4985f344fc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:34:00 crc kubenswrapper[5050]: I0131 06:34:00.936699 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-n5rc7_6cbec4ed-d7d5-45f2-8919-96d339becbba/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:34:01 crc kubenswrapper[5050]: I0131 06:34:01.745388 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ddd6l_54089fa6-6fa3-4f57-a554-eb47674a935f/ssh-known-hosts-edpm-deployment/0.log"
Jan 31 06:34:01 crc kubenswrapper[5050]: I0131 06:34:01.903211 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0d50c6e8-cdc2-4d84-ae91-a7d5d5f3290b/test-operator-logs-container/0.log"
Jan 31 06:34:02 crc kubenswrapper[5050]: I0131 06:34:02.126867 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vllq6_050ca660-6308-4916-8b90-a3bffeca8e39/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 06:34:02 crc kubenswrapper[5050]: I0131 06:34:02.303556 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_35f7d3c2-6102-4838-ae18-e42d9d69e172/tempest-tests-tempest-tests-runner/0.log"
Jan 31 06:34:07 crc kubenswrapper[5050]: I0131 06:34:07.560314 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_92f101f3-10e7-4e7f-a980-ce6a40e6e042/memcached/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.012775 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl_17fa3b32-c974-4d30-be03-3d92d42e9a79/util/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.196944 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl_17fa3b32-c974-4d30-be03-3d92d42e9a79/pull/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.197114 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl_17fa3b32-c974-4d30-be03-3d92d42e9a79/pull/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.254363 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl_17fa3b32-c974-4d30-be03-3d92d42e9a79/util/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.375390 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl_17fa3b32-c974-4d30-be03-3d92d42e9a79/pull/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.381421 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl_17fa3b32-c974-4d30-be03-3d92d42e9a79/util/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.398479 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c1a2d3607d58142dda83d5132055f0ea1f878317f6fa3ea40f4518e948fcpl_17fa3b32-c974-4d30-be03-3d92d42e9a79/extract/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.637183 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-sz8cj_c8455073-ced2-40e7-931f-ca08690af6d1/manager/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.680636 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-k54v4_97258518-ab25-46fa-85b3-bf5c65982b69/manager/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.857254 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-vfcdz_edcfb389-aa48-48d3-a408-624b6d081495/manager/0.log"
Jan 31 06:34:29 crc kubenswrapper[5050]: I0131 06:34:29.992187 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-fvrm9_6c99a6ca-0409-48ea-ab61-681b887f2f6f/manager/0.log"
Jan 31 06:34:30 crc kubenswrapper[5050]: I0131 06:34:30.061335 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-qh226_0cca343b-0815-48b8-a05b-9246a0235ee7/manager/0.log"
Jan 31 06:34:30 crc kubenswrapper[5050]: I0131 06:34:30.125648 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-gb8gp_3f77e259-db73-4420-9448-3d1239afe25f/manager/0.log"
Jan 31 06:34:30 crc kubenswrapper[5050]: I0131 06:34:30.352209 5050 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-w5w6h_04bad2b0-6148-463e-a419-fa6c1526306c/manager/0.log" Jan 31 06:34:30 crc kubenswrapper[5050]: I0131 06:34:30.574566 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-v96rv_702ce305-8b7b-445c-9d94-442b12074572/manager/0.log" Jan 31 06:34:30 crc kubenswrapper[5050]: I0131 06:34:30.994943 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-x5vs7_a072243c-9f79-4f43-86c1-7a0275aadc2d/manager/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.057245 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-669699fbb-92tbj_2068f2d0-6afa-4df6-9d4b-37ea15900379/manager/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.161759 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-tts92_5a2adbe7-1023-4099-a956-864a1dc07459/manager/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.287940 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-52jtl_6c249ee1-fe54-4869-a25c-b84eea14bb5c/manager/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.452005 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-dg6bt_e02cf864-b078-4d57-b75a-0f6637da6869/manager/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.453477 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-768m8_30ec4f54-d1f8-49dd-b254-7b560b08905e/manager/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.619016 5050 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dp8phb_1cb7e321-484b-42e4-a276-0d27a7c5fc95/manager/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.753155 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-87557d48c-pffz5_604148c5-02b3-442c-b7d1-9e1434d74a2c/operator/0.log" Jan 31 06:34:31 crc kubenswrapper[5050]: I0131 06:34:31.979021 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-kvlgc_30aa3656-81d7-47e5-8671-db7d2b566aca/registry-server/0.log" Jan 31 06:34:32 crc kubenswrapper[5050]: I0131 06:34:32.170501 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-57zph_dae8abbe-3616-42f7-875f-454d03bda074/manager/0.log" Jan 31 06:34:32 crc kubenswrapper[5050]: I0131 06:34:32.239984 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-7sw6k_e90fb46f-4a14-4b3a-a330-418fce2fec93/manager/0.log" Jan 31 06:34:33 crc kubenswrapper[5050]: I0131 06:34:33.056614 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5sbj8_afb534c6-c882-4e20-b9d3-c4e732f60471/operator/0.log" Jan 31 06:34:33 crc kubenswrapper[5050]: I0131 06:34:33.205359 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-wb8ls_1605a776-2594-4959-a36e-70245cce24b4/manager/0.log" Jan 31 06:34:33 crc kubenswrapper[5050]: I0131 06:34:33.236367 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c7cc9dd76-c9qds_a5c20cf0-d535-4809-a555-7f439ebcc243/manager/0.log" Jan 31 06:34:33 crc kubenswrapper[5050]: I0131 06:34:33.378721 5050 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-9kr84_6fbf0eab-4931-4bb4-b894-95fb1f32407d/manager/0.log" Jan 31 06:34:33 crc kubenswrapper[5050]: I0131 06:34:33.473757 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-v62tj_a56fce84-913d-42bd-9afe-8831d997c58f/manager/0.log" Jan 31 06:34:33 crc kubenswrapper[5050]: I0131 06:34:33.523060 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-fcsch_4c7e8a65-3a04-4036-94bf-5df463991788/manager/0.log" Jan 31 06:34:53 crc kubenswrapper[5050]: I0131 06:34:53.245268 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x746s_2db7527f-a8bb-431d-ab1c-32c2278822aa/control-plane-machine-set-operator/0.log" Jan 31 06:34:53 crc kubenswrapper[5050]: I0131 06:34:53.384665 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h7wkt_e1c8049f-1b60-4e5c-a547-df42a78a841e/kube-rbac-proxy/0.log" Jan 31 06:34:53 crc kubenswrapper[5050]: I0131 06:34:53.436936 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h7wkt_e1c8049f-1b60-4e5c-a547-df42a78a841e/machine-api-operator/0.log" Jan 31 06:35:06 crc kubenswrapper[5050]: I0131 06:35:06.654523 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-66tc2_123237fa-3f5a-4153-88c6-0f0efc20738d/cert-manager-controller/0.log" Jan 31 06:35:06 crc kubenswrapper[5050]: I0131 06:35:06.679356 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-z8ctn_e248a1d5-f588-44e2-ad44-87016c519de8/cert-manager-cainjector/0.log" Jan 31 06:35:06 crc kubenswrapper[5050]: I0131 06:35:06.754715 5050 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-m5rsc_8d72a638-d293-4df5-b8c0-dcf876f1fa3d/cert-manager-webhook/0.log" Jan 31 06:35:18 crc kubenswrapper[5050]: I0131 06:35:18.625756 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-t99rk_d76c2e37-0e5c-4a24-8bb3-ff5f7cee2bf7/nmstate-console-plugin/0.log" Jan 31 06:35:18 crc kubenswrapper[5050]: I0131 06:35:18.771709 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9bxhq_68a91a5f-abd1-4f99-8417-e208ef75a82e/nmstate-handler/0.log" Jan 31 06:35:18 crc kubenswrapper[5050]: I0131 06:35:18.817774 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vkqdm_fb76c612-cfce-47d5-adaa-d7b10661b9ca/nmstate-metrics/0.log" Jan 31 06:35:18 crc kubenswrapper[5050]: I0131 06:35:18.852739 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vkqdm_fb76c612-cfce-47d5-adaa-d7b10661b9ca/kube-rbac-proxy/0.log" Jan 31 06:35:19 crc kubenswrapper[5050]: I0131 06:35:19.019391 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jwqtz_4803e93f-9a9e-43eb-8d0f-671abc22f91a/nmstate-operator/0.log" Jan 31 06:35:19 crc kubenswrapper[5050]: I0131 06:35:19.141730 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fxx9l_048078df-7c17-42ac-96bd-ddcbe64854d3/nmstate-webhook/0.log" Jan 31 06:35:39 crc kubenswrapper[5050]: I0131 06:35:39.018138 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:35:39 crc 
kubenswrapper[5050]: I0131 06:35:39.018667 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:35:46 crc kubenswrapper[5050]: I0131 06:35:46.642629 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wns9v_f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2/kube-rbac-proxy/0.log" Jan 31 06:35:46 crc kubenswrapper[5050]: I0131 06:35:46.843068 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wns9v_f8c15d1f-cdfa-4ff9-a1f6-77bb5a7bd1d2/controller/0.log" Jan 31 06:35:46 crc kubenswrapper[5050]: I0131 06:35:46.931004 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-frr-files/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.071346 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-reloader/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.075569 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-metrics/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.099539 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-frr-files/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.124742 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-reloader/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.323329 5050 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-metrics/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.323376 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-frr-files/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.332413 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-reloader/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.387348 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-metrics/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.495821 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-frr-files/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.517567 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-reloader/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.558124 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/cp-metrics/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.571654 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/controller/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.713139 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/frr-metrics/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.781063 5050 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/kube-rbac-proxy/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.817384 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/kube-rbac-proxy-frr/0.log" Jan 31 06:35:47 crc kubenswrapper[5050]: I0131 06:35:47.932762 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/reloader/0.log" Jan 31 06:35:48 crc kubenswrapper[5050]: I0131 06:35:48.071441 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-t8bqj_c8b6cef5-1a93-4009-9d0e-e6007edca005/frr-k8s-webhook-server/0.log" Jan 31 06:35:48 crc kubenswrapper[5050]: I0131 06:35:48.250540 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-699dfcf9bf-482s8_e0cbccec-0abb-496b-99f5-3dc3e2f884a9/manager/0.log" Jan 31 06:35:48 crc kubenswrapper[5050]: I0131 06:35:48.443358 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d49b744cb-vrv8m_6a311a66-2a17-4fa9-8da1-1910cca8d327/webhook-server/0.log" Jan 31 06:35:48 crc kubenswrapper[5050]: I0131 06:35:48.648134 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wc77p_3162d0c0-398a-4e7f-9ff9-9bfc3ed25615/kube-rbac-proxy/0.log" Jan 31 06:35:49 crc kubenswrapper[5050]: I0131 06:35:49.172060 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wc77p_3162d0c0-398a-4e7f-9ff9-9bfc3ed25615/speaker/0.log" Jan 31 06:35:49 crc kubenswrapper[5050]: I0131 06:35:49.299801 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sh9db_2a506e19-84f6-4f6e-a6e0-656c7a529151/frr/0.log" Jan 31 06:36:02 crc kubenswrapper[5050]: I0131 06:36:02.512575 5050 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl_3e3a55b6-6044-420d-8d5a-2dd94a073cbd/util/0.log" Jan 31 06:36:02 crc kubenswrapper[5050]: I0131 06:36:02.679026 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl_3e3a55b6-6044-420d-8d5a-2dd94a073cbd/pull/0.log" Jan 31 06:36:02 crc kubenswrapper[5050]: I0131 06:36:02.699805 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl_3e3a55b6-6044-420d-8d5a-2dd94a073cbd/util/0.log" Jan 31 06:36:02 crc kubenswrapper[5050]: I0131 06:36:02.703692 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl_3e3a55b6-6044-420d-8d5a-2dd94a073cbd/pull/0.log" Jan 31 06:36:02 crc kubenswrapper[5050]: I0131 06:36:02.869274 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl_3e3a55b6-6044-420d-8d5a-2dd94a073cbd/util/0.log" Jan 31 06:36:02 crc kubenswrapper[5050]: I0131 06:36:02.903164 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl_3e3a55b6-6044-420d-8d5a-2dd94a073cbd/pull/0.log" Jan 31 06:36:02 crc kubenswrapper[5050]: I0131 06:36:02.964571 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcw9wkl_3e3a55b6-6044-420d-8d5a-2dd94a073cbd/extract/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.087423 5050 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx_e6c90b3e-2181-426e-aee2-e92a2694ac1c/util/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.239922 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx_e6c90b3e-2181-426e-aee2-e92a2694ac1c/pull/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.239977 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx_e6c90b3e-2181-426e-aee2-e92a2694ac1c/pull/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.240205 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx_e6c90b3e-2181-426e-aee2-e92a2694ac1c/util/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.398837 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx_e6c90b3e-2181-426e-aee2-e92a2694ac1c/pull/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.410476 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx_e6c90b3e-2181-426e-aee2-e92a2694ac1c/extract/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.529667 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136cgcx_e6c90b3e-2181-426e-aee2-e92a2694ac1c/util/0.log" Jan 31 06:36:03 crc kubenswrapper[5050]: I0131 06:36:03.563580 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gp6l2_5187fc7e-79c1-49e5-8060-aeeed8bd9870/extract-utilities/0.log" Jan 31 06:36:04 crc 
kubenswrapper[5050]: I0131 06:36:04.218538 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gp6l2_5187fc7e-79c1-49e5-8060-aeeed8bd9870/extract-utilities/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.244480 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gp6l2_5187fc7e-79c1-49e5-8060-aeeed8bd9870/extract-content/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.268179 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gp6l2_5187fc7e-79c1-49e5-8060-aeeed8bd9870/extract-content/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.401681 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gp6l2_5187fc7e-79c1-49e5-8060-aeeed8bd9870/extract-utilities/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.404671 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gp6l2_5187fc7e-79c1-49e5-8060-aeeed8bd9870/extract-content/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.631873 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c9xmn_3c96568f-0200-4700-99cc-9c386d4fd176/extract-utilities/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.830584 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c9xmn_3c96568f-0200-4700-99cc-9c386d4fd176/extract-content/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.832613 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c9xmn_3c96568f-0200-4700-99cc-9c386d4fd176/extract-utilities/0.log" Jan 31 06:36:04 crc kubenswrapper[5050]: I0131 06:36:04.864597 5050 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-c9xmn_3c96568f-0200-4700-99cc-9c386d4fd176/extract-content/0.log" Jan 31 06:36:05 crc kubenswrapper[5050]: I0131 06:36:05.047268 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c9xmn_3c96568f-0200-4700-99cc-9c386d4fd176/extract-utilities/0.log" Jan 31 06:36:05 crc kubenswrapper[5050]: I0131 06:36:05.057769 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c9xmn_3c96568f-0200-4700-99cc-9c386d4fd176/extract-content/0.log" Jan 31 06:36:05 crc kubenswrapper[5050]: I0131 06:36:05.292780 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g9x8x_0511caf5-aa17-47ef-b30c-3ba05ec0b8dc/marketplace-operator/0.log" Jan 31 06:36:05 crc kubenswrapper[5050]: I0131 06:36:05.427757 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c9xmn_3c96568f-0200-4700-99cc-9c386d4fd176/registry-server/0.log" Jan 31 06:36:05 crc kubenswrapper[5050]: I0131 06:36:05.504519 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gp6l2_5187fc7e-79c1-49e5-8060-aeeed8bd9870/registry-server/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.267674 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w994t_9f72d45d-bc4c-4a9f-97b4-202d3493d7b4/extract-utilities/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.420286 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w994t_9f72d45d-bc4c-4a9f-97b4-202d3493d7b4/extract-content/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.422517 5050 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-w994t_9f72d45d-bc4c-4a9f-97b4-202d3493d7b4/extract-utilities/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.423346 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w994t_9f72d45d-bc4c-4a9f-97b4-202d3493d7b4/extract-content/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.585642 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w994t_9f72d45d-bc4c-4a9f-97b4-202d3493d7b4/extract-utilities/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.603755 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w994t_9f72d45d-bc4c-4a9f-97b4-202d3493d7b4/extract-content/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.664978 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pxdml_2a24e898-ac43-489c-a204-f817d6fb32a1/extract-utilities/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.828774 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pxdml_2a24e898-ac43-489c-a204-f817d6fb32a1/extract-content/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.834260 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pxdml_2a24e898-ac43-489c-a204-f817d6fb32a1/extract-utilities/0.log" Jan 31 06:36:06 crc kubenswrapper[5050]: I0131 06:36:06.836255 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pxdml_2a24e898-ac43-489c-a204-f817d6fb32a1/extract-content/0.log" Jan 31 06:36:07 crc kubenswrapper[5050]: I0131 06:36:07.010985 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pxdml_2a24e898-ac43-489c-a204-f817d6fb32a1/extract-content/0.log" 
Jan 31 06:36:07 crc kubenswrapper[5050]: I0131 06:36:07.048054 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pxdml_2a24e898-ac43-489c-a204-f817d6fb32a1/extract-utilities/0.log" Jan 31 06:36:07 crc kubenswrapper[5050]: I0131 06:36:07.315274 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w994t_9f72d45d-bc4c-4a9f-97b4-202d3493d7b4/registry-server/0.log" Jan 31 06:36:09 crc kubenswrapper[5050]: I0131 06:36:09.017975 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:36:09 crc kubenswrapper[5050]: I0131 06:36:09.018416 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:36:09 crc kubenswrapper[5050]: I0131 06:36:09.893403 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pxdml_2a24e898-ac43-489c-a204-f817d6fb32a1/registry-server/0.log" Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.017995 5050 patch_prober.go:28] interesting pod/machine-config-daemon-tbf62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.018547 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" 
podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.018596 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.019352 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869"} pod="openshift-machine-config-operator/machine-config-daemon-tbf62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.019415 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" containerName="machine-config-daemon" containerID="cri-o://68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" gracePeriod=600 Jan 31 06:36:39 crc kubenswrapper[5050]: E0131 06:36:39.160438 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.686262 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b8394e6-1648-4ba8-970b-242434354d42" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" exitCode=0 Jan 31 
06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.686297 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerDied","Data":"68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869"} Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.686601 5050 scope.go:117] "RemoveContainer" containerID="b5befb432b2b57e36a9e4ab8b626a53f7a42f79cf6fc4e5b91dc28b5896bd449" Jan 31 06:36:39 crc kubenswrapper[5050]: I0131 06:36:39.687188 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:36:39 crc kubenswrapper[5050]: E0131 06:36:39.687745 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:36:51 crc kubenswrapper[5050]: I0131 06:36:51.736702 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:36:51 crc kubenswrapper[5050]: E0131 06:36:51.737402 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:37:04 crc kubenswrapper[5050]: I0131 06:37:04.737009 5050 scope.go:117] "RemoveContainer" 
containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:37:04 crc kubenswrapper[5050]: E0131 06:37:04.737850 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:37:19 crc kubenswrapper[5050]: I0131 06:37:19.737327 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:37:19 crc kubenswrapper[5050]: E0131 06:37:19.739115 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:37:33 crc kubenswrapper[5050]: I0131 06:37:33.736883 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:37:33 crc kubenswrapper[5050]: E0131 06:37:33.737795 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:37:46 crc kubenswrapper[5050]: I0131 06:37:46.738205 5050 scope.go:117] 
"RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:37:46 crc kubenswrapper[5050]: E0131 06:37:46.739176 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:38:00 crc kubenswrapper[5050]: I0131 06:38:00.737008 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:38:00 crc kubenswrapper[5050]: E0131 06:38:00.737825 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:38:01 crc kubenswrapper[5050]: I0131 06:38:01.453183 5050 generic.go:334] "Generic (PLEG): container finished" podID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerID="d8ab1a78fe4b494ef6d0c074511459a77b59dc2f123902b2f92bfbeb7b3c93d9" exitCode=0 Jan 31 06:38:01 crc kubenswrapper[5050]: I0131 06:38:01.453278 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qbr8s/must-gather-wqzms" event={"ID":"4e8f15e4-442f-4a10-92fa-c0095b4ca407","Type":"ContainerDied","Data":"d8ab1a78fe4b494ef6d0c074511459a77b59dc2f123902b2f92bfbeb7b3c93d9"} Jan 31 06:38:01 crc kubenswrapper[5050]: I0131 06:38:01.454871 5050 scope.go:117] "RemoveContainer" 
containerID="d8ab1a78fe4b494ef6d0c074511459a77b59dc2f123902b2f92bfbeb7b3c93d9" Jan 31 06:38:02 crc kubenswrapper[5050]: I0131 06:38:02.302289 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qbr8s_must-gather-wqzms_4e8f15e4-442f-4a10-92fa-c0095b4ca407/gather/0.log" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.073223 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qbr8s/must-gather-wqzms"] Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.074244 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qbr8s/must-gather-wqzms" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerName="copy" containerID="cri-o://2ff71f521ccc3fa491a457525b0c747c39fdc6c1a6a81c6dfb54fc3f74169b38" gracePeriod=2 Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.084275 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qbr8s/must-gather-wqzms"] Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.533877 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qbr8s_must-gather-wqzms_4e8f15e4-442f-4a10-92fa-c0095b4ca407/copy/0.log" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.534487 5050 generic.go:334] "Generic (PLEG): container finished" podID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerID="2ff71f521ccc3fa491a457525b0c747c39fdc6c1a6a81c6dfb54fc3f74169b38" exitCode=143 Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.534530 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bb2177c67ef16cf3bb7f9a9373a8095e6cf5b54d9689e0f79f43d46ae24589d" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.562344 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qbr8s_must-gather-wqzms_4e8f15e4-442f-4a10-92fa-c0095b4ca407/copy/0.log" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.562777 5050 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.664868 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8f15e4-442f-4a10-92fa-c0095b4ca407-must-gather-output\") pod \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.665036 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7dq2\" (UniqueName: \"kubernetes.io/projected/4e8f15e4-442f-4a10-92fa-c0095b4ca407-kube-api-access-n7dq2\") pod \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\" (UID: \"4e8f15e4-442f-4a10-92fa-c0095b4ca407\") " Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.671217 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e8f15e4-442f-4a10-92fa-c0095b4ca407-kube-api-access-n7dq2" (OuterVolumeSpecName: "kube-api-access-n7dq2") pod "4e8f15e4-442f-4a10-92fa-c0095b4ca407" (UID: "4e8f15e4-442f-4a10-92fa-c0095b4ca407"). InnerVolumeSpecName "kube-api-access-n7dq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.767387 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7dq2\" (UniqueName: \"kubernetes.io/projected/4e8f15e4-442f-4a10-92fa-c0095b4ca407-kube-api-access-n7dq2\") on node \"crc\" DevicePath \"\"" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.833389 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e8f15e4-442f-4a10-92fa-c0095b4ca407-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4e8f15e4-442f-4a10-92fa-c0095b4ca407" (UID: "4e8f15e4-442f-4a10-92fa-c0095b4ca407"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:38:10 crc kubenswrapper[5050]: I0131 06:38:10.870874 5050 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8f15e4-442f-4a10-92fa-c0095b4ca407-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 06:38:11 crc kubenswrapper[5050]: I0131 06:38:11.542731 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qbr8s/must-gather-wqzms" Jan 31 06:38:11 crc kubenswrapper[5050]: I0131 06:38:11.748899 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" path="/var/lib/kubelet/pods/4e8f15e4-442f-4a10-92fa-c0095b4ca407/volumes" Jan 31 06:38:13 crc kubenswrapper[5050]: I0131 06:38:13.736937 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:38:13 crc kubenswrapper[5050]: E0131 06:38:13.738713 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:38:28 crc kubenswrapper[5050]: I0131 06:38:28.736541 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:38:28 crc kubenswrapper[5050]: E0131 06:38:28.737376 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.755752 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jr6b2"] Jan 31 06:38:39 crc kubenswrapper[5050]: E0131 06:38:39.756614 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483a26f4-cae7-447c-9104-d4f201c575d0" containerName="container-00" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756627 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="483a26f4-cae7-447c-9104-d4f201c575d0" containerName="container-00" Jan 31 06:38:39 crc kubenswrapper[5050]: E0131 06:38:39.756644 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerName="gather" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756650 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerName="gather" Jan 31 06:38:39 crc kubenswrapper[5050]: E0131 06:38:39.756666 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="extract-utilities" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756674 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="extract-utilities" Jan 31 06:38:39 crc kubenswrapper[5050]: E0131 06:38:39.756685 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="extract-content" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756693 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="extract-content" Jan 31 06:38:39 crc kubenswrapper[5050]: 
E0131 06:38:39.756712 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerName="copy" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756719 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerName="copy" Jan 31 06:38:39 crc kubenswrapper[5050]: E0131 06:38:39.756729 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="registry-server" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756734 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="registry-server" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756894 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="feacdde8-2174-4336-869c-8c7b6c7b3542" containerName="registry-server" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756910 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerName="copy" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756919 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="483a26f4-cae7-447c-9104-d4f201c575d0" containerName="container-00" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.756936 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8f15e4-442f-4a10-92fa-c0095b4ca407" containerName="gather" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.758208 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.768128 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jr6b2"] Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.800642 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-catalog-content\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.801540 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-utilities\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.801629 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd4lg\" (UniqueName: \"kubernetes.io/projected/481e045c-7203-44c6-8c95-83cadc805b1b-kube-api-access-zd4lg\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.903115 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-utilities\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.903185 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zd4lg\" (UniqueName: \"kubernetes.io/projected/481e045c-7203-44c6-8c95-83cadc805b1b-kube-api-access-zd4lg\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.903760 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-catalog-content\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.903766 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-utilities\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.904049 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-catalog-content\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:39 crc kubenswrapper[5050]: I0131 06:38:39.936809 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd4lg\" (UniqueName: \"kubernetes.io/projected/481e045c-7203-44c6-8c95-83cadc805b1b-kube-api-access-zd4lg\") pod \"certified-operators-jr6b2\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:40 crc kubenswrapper[5050]: I0131 06:38:40.078512 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:40 crc kubenswrapper[5050]: I0131 06:38:40.684677 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jr6b2"] Jan 31 06:38:40 crc kubenswrapper[5050]: W0131 06:38:40.710354 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481e045c_7203_44c6_8c95_83cadc805b1b.slice/crio-b37a51a32bd2dfac090615c1e32248de5f095bf5629e7142f5b6e64c5124f992 WatchSource:0}: Error finding container b37a51a32bd2dfac090615c1e32248de5f095bf5629e7142f5b6e64c5124f992: Status 404 returned error can't find the container with id b37a51a32bd2dfac090615c1e32248de5f095bf5629e7142f5b6e64c5124f992 Jan 31 06:38:40 crc kubenswrapper[5050]: I0131 06:38:40.737598 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:38:40 crc kubenswrapper[5050]: E0131 06:38:40.737931 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:38:40 crc kubenswrapper[5050]: I0131 06:38:40.826817 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr6b2" event={"ID":"481e045c-7203-44c6-8c95-83cadc805b1b","Type":"ContainerStarted","Data":"b37a51a32bd2dfac090615c1e32248de5f095bf5629e7142f5b6e64c5124f992"} Jan 31 06:38:41 crc kubenswrapper[5050]: I0131 06:38:41.835816 5050 generic.go:334] "Generic (PLEG): container finished" podID="481e045c-7203-44c6-8c95-83cadc805b1b" 
containerID="1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3" exitCode=0 Jan 31 06:38:41 crc kubenswrapper[5050]: I0131 06:38:41.835897 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr6b2" event={"ID":"481e045c-7203-44c6-8c95-83cadc805b1b","Type":"ContainerDied","Data":"1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3"} Jan 31 06:38:41 crc kubenswrapper[5050]: I0131 06:38:41.839924 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 06:38:43 crc kubenswrapper[5050]: I0131 06:38:43.859704 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr6b2" event={"ID":"481e045c-7203-44c6-8c95-83cadc805b1b","Type":"ContainerStarted","Data":"8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6"} Jan 31 06:38:44 crc kubenswrapper[5050]: I0131 06:38:44.870668 5050 generic.go:334] "Generic (PLEG): container finished" podID="481e045c-7203-44c6-8c95-83cadc805b1b" containerID="8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6" exitCode=0 Jan 31 06:38:44 crc kubenswrapper[5050]: I0131 06:38:44.870720 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr6b2" event={"ID":"481e045c-7203-44c6-8c95-83cadc805b1b","Type":"ContainerDied","Data":"8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6"} Jan 31 06:38:46 crc kubenswrapper[5050]: I0131 06:38:46.898328 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr6b2" event={"ID":"481e045c-7203-44c6-8c95-83cadc805b1b","Type":"ContainerStarted","Data":"0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd"} Jan 31 06:38:46 crc kubenswrapper[5050]: I0131 06:38:46.933130 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jr6b2" 
podStartSLOduration=3.826415903 podStartE2EDuration="7.933110306s" podCreationTimestamp="2026-01-31 06:38:39 +0000 UTC" firstStartedPulling="2026-01-31 06:38:41.839543784 +0000 UTC m=+4646.888705400" lastFinishedPulling="2026-01-31 06:38:45.946238197 +0000 UTC m=+4650.995399803" observedRunningTime="2026-01-31 06:38:46.931436881 +0000 UTC m=+4651.980598497" watchObservedRunningTime="2026-01-31 06:38:46.933110306 +0000 UTC m=+4651.982271902" Jan 31 06:38:50 crc kubenswrapper[5050]: I0131 06:38:50.079639 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:50 crc kubenswrapper[5050]: I0131 06:38:50.080097 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:50 crc kubenswrapper[5050]: I0131 06:38:50.124500 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:38:54 crc kubenswrapper[5050]: I0131 06:38:54.737141 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:38:54 crc kubenswrapper[5050]: E0131 06:38:54.737843 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:38:59 crc kubenswrapper[5050]: I0131 06:38:59.107348 5050 scope.go:117] "RemoveContainer" containerID="2ff71f521ccc3fa491a457525b0c747c39fdc6c1a6a81c6dfb54fc3f74169b38" Jan 31 06:38:59 crc kubenswrapper[5050]: I0131 06:38:59.136128 5050 scope.go:117] "RemoveContainer" 
containerID="d8ab1a78fe4b494ef6d0c074511459a77b59dc2f123902b2f92bfbeb7b3c93d9" Jan 31 06:38:59 crc kubenswrapper[5050]: I0131 06:38:59.204765 5050 scope.go:117] "RemoveContainer" containerID="7e9b5d983a252475fa070d73cfe6a7ccb66b4f3df586bdea6666e078bb3c4233" Jan 31 06:39:00 crc kubenswrapper[5050]: I0131 06:39:00.172500 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:39:00 crc kubenswrapper[5050]: I0131 06:39:00.247585 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jr6b2"] Jan 31 06:39:01 crc kubenswrapper[5050]: I0131 06:39:01.021475 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jr6b2" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="registry-server" containerID="cri-o://0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd" gracePeriod=2 Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.024618 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.029848 5050 generic.go:334] "Generic (PLEG): container finished" podID="481e045c-7203-44c6-8c95-83cadc805b1b" containerID="0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd" exitCode=0 Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.029894 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jr6b2" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.029902 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr6b2" event={"ID":"481e045c-7203-44c6-8c95-83cadc805b1b","Type":"ContainerDied","Data":"0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd"} Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.029937 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr6b2" event={"ID":"481e045c-7203-44c6-8c95-83cadc805b1b","Type":"ContainerDied","Data":"b37a51a32bd2dfac090615c1e32248de5f095bf5629e7142f5b6e64c5124f992"} Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.029976 5050 scope.go:117] "RemoveContainer" containerID="0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.065806 5050 scope.go:117] "RemoveContainer" containerID="8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.091474 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd4lg\" (UniqueName: \"kubernetes.io/projected/481e045c-7203-44c6-8c95-83cadc805b1b-kube-api-access-zd4lg\") pod \"481e045c-7203-44c6-8c95-83cadc805b1b\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.091525 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-utilities\") pod \"481e045c-7203-44c6-8c95-83cadc805b1b\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.091714 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-catalog-content\") pod \"481e045c-7203-44c6-8c95-83cadc805b1b\" (UID: \"481e045c-7203-44c6-8c95-83cadc805b1b\") " Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.092624 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-utilities" (OuterVolumeSpecName: "utilities") pod "481e045c-7203-44c6-8c95-83cadc805b1b" (UID: "481e045c-7203-44c6-8c95-83cadc805b1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.138414 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481e045c-7203-44c6-8c95-83cadc805b1b" (UID: "481e045c-7203-44c6-8c95-83cadc805b1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.194976 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.195041 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481e045c-7203-44c6-8c95-83cadc805b1b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.865166 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481e045c-7203-44c6-8c95-83cadc805b1b-kube-api-access-zd4lg" (OuterVolumeSpecName: "kube-api-access-zd4lg") pod "481e045c-7203-44c6-8c95-83cadc805b1b" (UID: "481e045c-7203-44c6-8c95-83cadc805b1b"). InnerVolumeSpecName "kube-api-access-zd4lg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.883889 5050 scope.go:117] "RemoveContainer" containerID="1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.904633 5050 scope.go:117] "RemoveContainer" containerID="0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd" Jan 31 06:39:02 crc kubenswrapper[5050]: E0131 06:39:02.905153 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd\": container with ID starting with 0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd not found: ID does not exist" containerID="0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.905202 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd"} err="failed to get container status \"0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd\": rpc error: code = NotFound desc = could not find container \"0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd\": container with ID starting with 0d698c28ac382e548fd78d80c5caf6a2f8cad647496f975a820baaa30e534cdd not found: ID does not exist" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.905235 5050 scope.go:117] "RemoveContainer" containerID="8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6" Jan 31 06:39:02 crc kubenswrapper[5050]: E0131 06:39:02.905781 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6\": container with ID starting with 
8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6 not found: ID does not exist" containerID="8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.905829 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6"} err="failed to get container status \"8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6\": rpc error: code = NotFound desc = could not find container \"8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6\": container with ID starting with 8d26a0d59722ad2808144532c1dd72c8ace14545415108ea2c0a25965be868e6 not found: ID does not exist" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.905857 5050 scope.go:117] "RemoveContainer" containerID="1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3" Jan 31 06:39:02 crc kubenswrapper[5050]: E0131 06:39:02.906214 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3\": container with ID starting with 1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3 not found: ID does not exist" containerID="1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.906263 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3"} err="failed to get container status \"1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3\": rpc error: code = NotFound desc = could not find container \"1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3\": container with ID starting with 1d225dfbe146b1099164a68f9ef9a37082bf51099e25f86478a56e39c931a4f3 not found: ID does not 
exist" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.908508 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd4lg\" (UniqueName: \"kubernetes.io/projected/481e045c-7203-44c6-8c95-83cadc805b1b-kube-api-access-zd4lg\") on node \"crc\" DevicePath \"\"" Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.972237 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jr6b2"] Jan 31 06:39:02 crc kubenswrapper[5050]: I0131 06:39:02.980903 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jr6b2"] Jan 31 06:39:03 crc kubenswrapper[5050]: I0131 06:39:03.746567 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" path="/var/lib/kubelet/pods/481e045c-7203-44c6-8c95-83cadc805b1b/volumes" Jan 31 06:39:06 crc kubenswrapper[5050]: I0131 06:39:06.736984 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:39:06 crc kubenswrapper[5050]: E0131 06:39:06.737652 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:39:20 crc kubenswrapper[5050]: I0131 06:39:20.736001 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:39:20 crc kubenswrapper[5050]: E0131 06:39:20.736758 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:39:35 crc kubenswrapper[5050]: I0131 06:39:35.742468 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:39:35 crc kubenswrapper[5050]: E0131 06:39:35.743563 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:39:46 crc kubenswrapper[5050]: I0131 06:39:46.736665 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:39:46 crc kubenswrapper[5050]: E0131 06:39:46.738422 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:39:58 crc kubenswrapper[5050]: I0131 06:39:58.736511 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:39:58 crc kubenswrapper[5050]: E0131 06:39:58.737331 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.464660 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txkmv"] Jan 31 06:40:07 crc kubenswrapper[5050]: E0131 06:40:07.465831 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="extract-utilities" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.465850 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="extract-utilities" Jan 31 06:40:07 crc kubenswrapper[5050]: E0131 06:40:07.465861 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="registry-server" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.465869 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="registry-server" Jan 31 06:40:07 crc kubenswrapper[5050]: E0131 06:40:07.465902 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="extract-content" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.465910 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="extract-content" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.466171 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="481e045c-7203-44c6-8c95-83cadc805b1b" containerName="registry-server" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.467859 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.477492 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txkmv"] Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.556292 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4fx\" (UniqueName: \"kubernetes.io/projected/00fa9113-ac02-4f0d-8cb0-8376b906ca91-kube-api-access-bt4fx\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.556406 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-catalog-content\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.556493 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-utilities\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.657441 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-utilities\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.657573 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bt4fx\" (UniqueName: \"kubernetes.io/projected/00fa9113-ac02-4f0d-8cb0-8376b906ca91-kube-api-access-bt4fx\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.657640 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-catalog-content\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.658079 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-utilities\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.658090 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-catalog-content\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.691920 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4fx\" (UniqueName: \"kubernetes.io/projected/00fa9113-ac02-4f0d-8cb0-8376b906ca91-kube-api-access-bt4fx\") pod \"redhat-operators-txkmv\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:07 crc kubenswrapper[5050]: I0131 06:40:07.787202 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:08 crc kubenswrapper[5050]: I0131 06:40:08.266699 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txkmv"] Jan 31 06:40:08 crc kubenswrapper[5050]: I0131 06:40:08.614532 5050 generic.go:334] "Generic (PLEG): container finished" podID="00fa9113-ac02-4f0d-8cb0-8376b906ca91" containerID="df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325" exitCode=0 Jan 31 06:40:08 crc kubenswrapper[5050]: I0131 06:40:08.614603 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txkmv" event={"ID":"00fa9113-ac02-4f0d-8cb0-8376b906ca91","Type":"ContainerDied","Data":"df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325"} Jan 31 06:40:08 crc kubenswrapper[5050]: I0131 06:40:08.616139 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txkmv" event={"ID":"00fa9113-ac02-4f0d-8cb0-8376b906ca91","Type":"ContainerStarted","Data":"36a178218fb1fe9cbd0bbdeaa54c36490e17f805b5b3bc5fb283e447b07ed0d6"} Jan 31 06:40:11 crc kubenswrapper[5050]: I0131 06:40:11.648070 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txkmv" event={"ID":"00fa9113-ac02-4f0d-8cb0-8376b906ca91","Type":"ContainerStarted","Data":"a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c"} Jan 31 06:40:12 crc kubenswrapper[5050]: I0131 06:40:12.658941 5050 generic.go:334] "Generic (PLEG): container finished" podID="00fa9113-ac02-4f0d-8cb0-8376b906ca91" containerID="a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c" exitCode=0 Jan 31 06:40:12 crc kubenswrapper[5050]: I0131 06:40:12.659054 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txkmv" 
event={"ID":"00fa9113-ac02-4f0d-8cb0-8376b906ca91","Type":"ContainerDied","Data":"a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c"} Jan 31 06:40:12 crc kubenswrapper[5050]: I0131 06:40:12.736750 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:40:12 crc kubenswrapper[5050]: E0131 06:40:12.737133 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:40:14 crc kubenswrapper[5050]: I0131 06:40:14.683828 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txkmv" event={"ID":"00fa9113-ac02-4f0d-8cb0-8376b906ca91","Type":"ContainerStarted","Data":"48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579"} Jan 31 06:40:14 crc kubenswrapper[5050]: I0131 06:40:14.701090 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txkmv" podStartSLOduration=2.969017458 podStartE2EDuration="7.701071707s" podCreationTimestamp="2026-01-31 06:40:07 +0000 UTC" firstStartedPulling="2026-01-31 06:40:08.617209024 +0000 UTC m=+4733.666370630" lastFinishedPulling="2026-01-31 06:40:13.349263243 +0000 UTC m=+4738.398424879" observedRunningTime="2026-01-31 06:40:14.698002764 +0000 UTC m=+4739.747164360" watchObservedRunningTime="2026-01-31 06:40:14.701071707 +0000 UTC m=+4739.750233313" Jan 31 06:40:17 crc kubenswrapper[5050]: I0131 06:40:17.788248 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:17 crc 
kubenswrapper[5050]: I0131 06:40:17.788598 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:18 crc kubenswrapper[5050]: I0131 06:40:18.835681 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-txkmv" podUID="00fa9113-ac02-4f0d-8cb0-8376b906ca91" containerName="registry-server" probeResult="failure" output=< Jan 31 06:40:18 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:40:18 crc kubenswrapper[5050]: > Jan 31 06:40:25 crc kubenswrapper[5050]: I0131 06:40:25.742336 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:40:25 crc kubenswrapper[5050]: E0131 06:40:25.743190 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:40:26 crc kubenswrapper[5050]: I0131 06:40:26.894525 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v225m"] Jan 31 06:40:26 crc kubenswrapper[5050]: I0131 06:40:26.897073 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:26 crc kubenswrapper[5050]: I0131 06:40:26.905605 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v225m"] Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.063586 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-utilities\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.063990 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-catalog-content\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.064137 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqzt\" (UniqueName: \"kubernetes.io/projected/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-kube-api-access-zbqzt\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.166901 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbqzt\" (UniqueName: \"kubernetes.io/projected/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-kube-api-access-zbqzt\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.167169 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-utilities\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.167246 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-catalog-content\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.167748 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-utilities\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.167860 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-catalog-content\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.190032 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbqzt\" (UniqueName: \"kubernetes.io/projected/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-kube-api-access-zbqzt\") pod \"community-operators-v225m\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.224029 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:27 crc kubenswrapper[5050]: W0131 06:40:27.777628 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf1dd2d_12c2_4eb5_b6e7_f16ca73ada91.slice/crio-63d44854fa533da7e98004c88cda49a4ffc01630926b9ff537f82ca657b9fa20 WatchSource:0}: Error finding container 63d44854fa533da7e98004c88cda49a4ffc01630926b9ff537f82ca657b9fa20: Status 404 returned error can't find the container with id 63d44854fa533da7e98004c88cda49a4ffc01630926b9ff537f82ca657b9fa20 Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.777752 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v225m"] Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.839875 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.845335 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v225m" event={"ID":"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91","Type":"ContainerStarted","Data":"63d44854fa533da7e98004c88cda49a4ffc01630926b9ff537f82ca657b9fa20"} Jan 31 06:40:27 crc kubenswrapper[5050]: I0131 06:40:27.897731 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:28 crc kubenswrapper[5050]: I0131 06:40:28.855410 5050 generic.go:334] "Generic (PLEG): container finished" podID="bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" containerID="af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea" exitCode=0 Jan 31 06:40:28 crc kubenswrapper[5050]: I0131 06:40:28.857323 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v225m" 
event={"ID":"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91","Type":"ContainerDied","Data":"af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea"} Jan 31 06:40:29 crc kubenswrapper[5050]: I0131 06:40:29.283647 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txkmv"] Jan 31 06:40:29 crc kubenswrapper[5050]: I0131 06:40:29.870620 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-txkmv" podUID="00fa9113-ac02-4f0d-8cb0-8376b906ca91" containerName="registry-server" containerID="cri-o://48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579" gracePeriod=2 Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.373948 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.534426 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-catalog-content\") pod \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.534656 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-utilities\") pod \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.534715 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt4fx\" (UniqueName: \"kubernetes.io/projected/00fa9113-ac02-4f0d-8cb0-8376b906ca91-kube-api-access-bt4fx\") pod \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\" (UID: \"00fa9113-ac02-4f0d-8cb0-8376b906ca91\") " Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 
06:40:30.535789 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-utilities" (OuterVolumeSpecName: "utilities") pod "00fa9113-ac02-4f0d-8cb0-8376b906ca91" (UID: "00fa9113-ac02-4f0d-8cb0-8376b906ca91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.542409 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fa9113-ac02-4f0d-8cb0-8376b906ca91-kube-api-access-bt4fx" (OuterVolumeSpecName: "kube-api-access-bt4fx") pod "00fa9113-ac02-4f0d-8cb0-8376b906ca91" (UID: "00fa9113-ac02-4f0d-8cb0-8376b906ca91"). InnerVolumeSpecName "kube-api-access-bt4fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.638940 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.639003 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt4fx\" (UniqueName: \"kubernetes.io/projected/00fa9113-ac02-4f0d-8cb0-8376b906ca91-kube-api-access-bt4fx\") on node \"crc\" DevicePath \"\"" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.676090 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00fa9113-ac02-4f0d-8cb0-8376b906ca91" (UID: "00fa9113-ac02-4f0d-8cb0-8376b906ca91"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.740576 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00fa9113-ac02-4f0d-8cb0-8376b906ca91-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.881001 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v225m" event={"ID":"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91","Type":"ContainerStarted","Data":"6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442"} Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.884153 5050 generic.go:334] "Generic (PLEG): container finished" podID="00fa9113-ac02-4f0d-8cb0-8376b906ca91" containerID="48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579" exitCode=0 Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.884203 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txkmv" event={"ID":"00fa9113-ac02-4f0d-8cb0-8376b906ca91","Type":"ContainerDied","Data":"48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579"} Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.884230 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txkmv" event={"ID":"00fa9113-ac02-4f0d-8cb0-8376b906ca91","Type":"ContainerDied","Data":"36a178218fb1fe9cbd0bbdeaa54c36490e17f805b5b3bc5fb283e447b07ed0d6"} Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.884255 5050 scope.go:117] "RemoveContainer" containerID="48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.884410 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txkmv" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.920455 5050 scope.go:117] "RemoveContainer" containerID="a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c" Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.937209 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txkmv"] Jan 31 06:40:30 crc kubenswrapper[5050]: I0131 06:40:30.945904 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-txkmv"] Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.283102 5050 scope.go:117] "RemoveContainer" containerID="df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325" Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.432456 5050 scope.go:117] "RemoveContainer" containerID="48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579" Jan 31 06:40:31 crc kubenswrapper[5050]: E0131 06:40:31.433279 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579\": container with ID starting with 48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579 not found: ID does not exist" containerID="48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579" Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.433351 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579"} err="failed to get container status \"48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579\": rpc error: code = NotFound desc = could not find container \"48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579\": container with ID starting with 48f44276806f5e9cdd7bb5be1813a5ea0d66bc95045ab909131d9bb67d59c579 not found: ID does 
not exist" Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.433402 5050 scope.go:117] "RemoveContainer" containerID="a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c" Jan 31 06:40:31 crc kubenswrapper[5050]: E0131 06:40:31.434132 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c\": container with ID starting with a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c not found: ID does not exist" containerID="a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c" Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.434201 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c"} err="failed to get container status \"a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c\": rpc error: code = NotFound desc = could not find container \"a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c\": container with ID starting with a8963c29fc1c8c451af1a82245dc29d807f3c08c01d8a9724cb6696b1839892c not found: ID does not exist" Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.434247 5050 scope.go:117] "RemoveContainer" containerID="df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325" Jan 31 06:40:31 crc kubenswrapper[5050]: E0131 06:40:31.434877 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325\": container with ID starting with df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325 not found: ID does not exist" containerID="df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325" Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.435054 5050 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325"} err="failed to get container status \"df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325\": rpc error: code = NotFound desc = could not find container \"df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325\": container with ID starting with df5d261a892f4a6de3ed93a04c57a3f98fd56962a1e93c489d060934c58e9325 not found: ID does not exist" Jan 31 06:40:31 crc kubenswrapper[5050]: I0131 06:40:31.748332 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00fa9113-ac02-4f0d-8cb0-8376b906ca91" path="/var/lib/kubelet/pods/00fa9113-ac02-4f0d-8cb0-8376b906ca91/volumes" Jan 31 06:40:34 crc kubenswrapper[5050]: I0131 06:40:34.927003 5050 generic.go:334] "Generic (PLEG): container finished" podID="bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" containerID="6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442" exitCode=0 Jan 31 06:40:34 crc kubenswrapper[5050]: I0131 06:40:34.927150 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v225m" event={"ID":"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91","Type":"ContainerDied","Data":"6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442"} Jan 31 06:40:36 crc kubenswrapper[5050]: I0131 06:40:36.959204 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v225m" event={"ID":"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91","Type":"ContainerStarted","Data":"2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448"} Jan 31 06:40:36 crc kubenswrapper[5050]: I0131 06:40:36.991529 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v225m" podStartSLOduration=4.248933649 podStartE2EDuration="10.991501582s" podCreationTimestamp="2026-01-31 06:40:26 +0000 UTC" firstStartedPulling="2026-01-31 
06:40:28.858253746 +0000 UTC m=+4753.907415342" lastFinishedPulling="2026-01-31 06:40:35.600821689 +0000 UTC m=+4760.649983275" observedRunningTime="2026-01-31 06:40:36.980807413 +0000 UTC m=+4762.029969029" watchObservedRunningTime="2026-01-31 06:40:36.991501582 +0000 UTC m=+4762.040663178" Jan 31 06:40:37 crc kubenswrapper[5050]: I0131 06:40:37.224937 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:37 crc kubenswrapper[5050]: I0131 06:40:37.225016 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:38 crc kubenswrapper[5050]: I0131 06:40:38.276093 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-v225m" podUID="bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" containerName="registry-server" probeResult="failure" output=< Jan 31 06:40:38 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Jan 31 06:40:38 crc kubenswrapper[5050]: > Jan 31 06:40:40 crc kubenswrapper[5050]: I0131 06:40:40.736519 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:40:40 crc kubenswrapper[5050]: E0131 06:40:40.737466 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:40:47 crc kubenswrapper[5050]: I0131 06:40:47.299652 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:47 crc kubenswrapper[5050]: 
I0131 06:40:47.351399 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:47 crc kubenswrapper[5050]: I0131 06:40:47.538449 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v225m"] Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.072482 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v225m" podUID="bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" containerName="registry-server" containerID="cri-o://2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448" gracePeriod=2 Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.660450 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.736885 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-utilities\") pod \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.737351 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbqzt\" (UniqueName: \"kubernetes.io/projected/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-kube-api-access-zbqzt\") pod \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.737605 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-catalog-content\") pod \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\" (UID: \"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91\") " Jan 31 06:40:49 crc 
kubenswrapper[5050]: I0131 06:40:49.738107 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-utilities" (OuterVolumeSpecName: "utilities") pod "bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" (UID: "bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.738425 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.744267 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-kube-api-access-zbqzt" (OuterVolumeSpecName: "kube-api-access-zbqzt") pod "bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" (UID: "bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91"). InnerVolumeSpecName "kube-api-access-zbqzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.798094 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" (UID: "bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.840698 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbqzt\" (UniqueName: \"kubernetes.io/projected/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-kube-api-access-zbqzt\") on node \"crc\" DevicePath \"\"" Jan 31 06:40:49 crc kubenswrapper[5050]: I0131 06:40:49.840732 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.104008 5050 generic.go:334] "Generic (PLEG): container finished" podID="bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" containerID="2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448" exitCode=0 Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.104059 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v225m" event={"ID":"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91","Type":"ContainerDied","Data":"2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448"} Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.104086 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v225m" event={"ID":"bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91","Type":"ContainerDied","Data":"63d44854fa533da7e98004c88cda49a4ffc01630926b9ff537f82ca657b9fa20"} Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.104104 5050 scope.go:117] "RemoveContainer" containerID="2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.104145 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v225m" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.134122 5050 scope.go:117] "RemoveContainer" containerID="6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.150602 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v225m"] Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.160825 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v225m"] Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.162248 5050 scope.go:117] "RemoveContainer" containerID="af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.208189 5050 scope.go:117] "RemoveContainer" containerID="2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448" Jan 31 06:40:50 crc kubenswrapper[5050]: E0131 06:40:50.208772 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448\": container with ID starting with 2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448 not found: ID does not exist" containerID="2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.208825 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448"} err="failed to get container status \"2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448\": rpc error: code = NotFound desc = could not find container \"2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448\": container with ID starting with 2b88c0ca7c1cb4ba0ef7ce1a85846164ad585c432e10356ae9f1d34024aef448 not 
found: ID does not exist" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.208853 5050 scope.go:117] "RemoveContainer" containerID="6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442" Jan 31 06:40:50 crc kubenswrapper[5050]: E0131 06:40:50.209131 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442\": container with ID starting with 6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442 not found: ID does not exist" containerID="6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.209159 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442"} err="failed to get container status \"6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442\": rpc error: code = NotFound desc = could not find container \"6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442\": container with ID starting with 6a97e226a39da4cfbb4e719cde38947ddd03cfd0069c49860e8e70b9d91c5442 not found: ID does not exist" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.209175 5050 scope.go:117] "RemoveContainer" containerID="af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea" Jan 31 06:40:50 crc kubenswrapper[5050]: E0131 06:40:50.209420 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea\": container with ID starting with af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea not found: ID does not exist" containerID="af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea" Jan 31 06:40:50 crc kubenswrapper[5050]: I0131 06:40:50.209454 5050 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea"} err="failed to get container status \"af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea\": rpc error: code = NotFound desc = could not find container \"af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea\": container with ID starting with af2d44a8c7bdeddbe8b7655a5e3cab8078d7c0f456185b750a0a997782519dea not found: ID does not exist" Jan 31 06:40:51 crc kubenswrapper[5050]: I0131 06:40:51.749201 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91" path="/var/lib/kubelet/pods/bdf1dd2d-12c2-4eb5-b6e7-f16ca73ada91/volumes" Jan 31 06:40:55 crc kubenswrapper[5050]: I0131 06:40:55.744982 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:40:55 crc kubenswrapper[5050]: E0131 06:40:55.745887 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:41:06 crc kubenswrapper[5050]: I0131 06:41:06.736861 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:41:06 crc kubenswrapper[5050]: E0131 06:41:06.737789 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:41:20 crc kubenswrapper[5050]: I0131 06:41:20.737151 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:41:20 crc kubenswrapper[5050]: E0131 06:41:20.737989 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:41:33 crc kubenswrapper[5050]: I0131 06:41:33.736916 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:41:33 crc kubenswrapper[5050]: E0131 06:41:33.737882 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tbf62_openshift-machine-config-operator(5b8394e6-1648-4ba8-970b-242434354d42)\"" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" podUID="5b8394e6-1648-4ba8-970b-242434354d42" Jan 31 06:41:45 crc kubenswrapper[5050]: I0131 06:41:45.743421 5050 scope.go:117] "RemoveContainer" containerID="68f1261e1382c77157cb4875f0a9271cd98e6fca3b49ce98cc3a00ebb3835869" Jan 31 06:41:46 crc kubenswrapper[5050]: I0131 06:41:46.627457 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tbf62" 
event={"ID":"5b8394e6-1648-4ba8-970b-242434354d42","Type":"ContainerStarted","Data":"11dae8eb8239610b8e21a2295c1fe350d62fda28e1250a8669625f3655c005af"}